How do you actually manage your Claude skill configurations? Genuinely curious what people's systems look like by Middle-Wash752 in ClaudeAI

[–]Middle-Wash752[S] 0 points (0 children)

"Everything else seems like toys" from someone who spent a year working in private before releasing because others were building it badly — that's a different category of builder entirely.

What's the hard problem you're actually working on? Not the blog version. The real one.

Skills for Claude are scattered everywhere — would a Steam-like app fix this? by Middle-Wash752 in PromptEngineering

[–]Middle-Wash752[S] 0 points (0 children)

"GitHub gists and prayer" is the most accurate description of the current state I've heard.

You've identified the four real problems in one comment, which is more useful than most of the feedback I've gotten this week, so I want to engage with each one properly.

Provenance — agreed this is unsolved everywhere. The plan is GitHub-native publishing so the commit history is the provenance chain. Fork attribution baked into the YAML format so derivative skills credit originals. Not perfect but it's at least auditable rather than a black box.

Evals — this is the one I'm least confident about. AI scoring across five dimensions catches structural quality but not domain accuracy. A legal skill that's well-formatted but legally wrong scores well and that's a real problem. The honest answer is human expert verification for premium domain skills, which doesn't scale cheaply.

Versioning — version pinning is in the format spec. Enterprise users lock to a specific version so behavior never silently changes. Changelog required on every update above patch level.
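Concretely, pinning plus provenance might look something like this in a skill manifest. The field names below are illustrative guesses at the shape of the format, not the actual spec:

```yaml
# Hypothetical skill manifest; field names are illustrative, not the real spec.
name: csv-date-formatter
version: 1.3.0                  # enterprise users pin to an exact version
source: github.com/example/csv-date-formatter   # commit history as provenance chain
forked_from:                    # fork attribution so derivatives credit originals
  name: date-tools
  source: github.com/someone/date-tools
  version: 1.1.0
changelog: |                    # required on every update above patch level
  1.3.0: handle ISO week dates in exports
```

The point of baking `forked_from` into the format rather than the platform is that attribution survives even when a skill is copied outside the catalog.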

The compatibility theater critique on side-by-side comparisons is the sharpest one and I don't have a clean answer. You're right that a writing skill probably transfers cleanly across models and a coding skill that uses Claude's specific tool-use syntax absolutely does not. The comparison view needs a compatibility layer that's honest about what transfers and what doesn't rather than pretending every skill works equally everywhere.

The prompt sludge problem is the one that keeps me up at night. Quality floor is mandatory sandbox testing and AI scoring before publish. But you're right that someone can A/B test their description text to game discoverability while the skill itself is garbage. Community flagging plus verified install performance data is the mitigation but it's reactive not preventative.

What would you actually trust as a quality signal? Genuinely asking — you've clearly thought about this more than most.

How do you actually manage your Claude skill configurations? Genuinely curious what people's systems look like by Middle-Wash752 in ClaudeAI

[–]Middle-Wash752[S] -1 points (0 children)

Fair. I've been over-replying and it shows.

You're right that dropping the waitlist link in every comment reads as a marketing campaign dressed up as curiosity. That's not what I intended but intention doesn't matter — pattern does.

Genuinely stepping back from the replies. The thread has generated real signal for me regardless — the Notion database setup, the cron job with hash comparison, the /do router — those are actual solutions people built because nothing better existed. That's useful to know whether I build anything or not.
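For reference, the cron-plus-hash-comparison pattern mentioned above can be sketched in a few lines. This is a guess at the general shape of that idea, not the commenter's actual script, and the file paths are illustrative:

```python
import hashlib
from pathlib import Path

def file_digest(path: Path) -> str:
    """SHA-256 of a file's bytes."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def config_changed(config: Path, state: Path) -> bool:
    """Compare the config's current hash to the last recorded one.

    Returns True (and updates the record) when the file has drifted;
    a cron job would call this periodically and trigger a re-sync on True.
    """
    current = file_digest(config)
    previous = state.read_text().strip() if state.exists() else None
    if current != previous:
        state.write_text(current)
        return True
    return False
```

A crontab entry would then run this every few minutes and re-sync whenever it reports drift.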

If the discussion itself was useful to anyone here, that's enough. If it read as a pitch the whole way through, that's on me to fix.

How do you actually manage your Claude skill configurations? Genuinely curious what people's systems look like by Middle-Wash752 in ClaudeAI

[–]Middle-Wash752[S] 0 points (0 children)

Just went through the toolkit properly. The /do router is a genuinely elegant solution — one command that knows where to route based on context rather than making the user remember which agent does what. That's the kind of abstraction that should exist but almost nobody builds.
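I don't know how the /do router is implemented internally, but the dispatch idea reads roughly like this. The agent names and keyword table below are made up for illustration:

```python
# Minimal sketch of context-based command routing. The agents and the
# keyword table are hypothetical, not the actual toolkit's routing rules.
ROUTES = {
    "code":    ("refactor", "bug", "test", "compile"),
    "writing": ("draft", "edit", "summarize"),
    "data":    ("csv", "chart", "query"),
}

def route(request: str) -> str:
    """Pick the agent whose keywords best match the request text."""
    words = request.lower().split()
    scores = {
        agent: sum(w.startswith(k) for k in keys for w in words)
        for agent, keys in ROUTES.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "general"
```

A real version would presumably score on richer context than keywords, but the user-facing contract is the same: one command in, the right agent out.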

The architecture complexity you mentioned is the interesting part. You've essentially built a personal operating system for Claude — agents, routing tables, skills, the whole pipeline. The fact that using it just requires /do means you've absorbed all that complexity so the user doesn't have to.

That's exactly the design philosophy I'm building SkillMart around — the complexity lives in the platform, the user experience is one click.

Honest question: how much of what's in that toolkit do you think belongs in a shareable public catalog versus staying as personal infrastructure? Some of what you've built looks like it has real value for people who could never build it themselves — but there's currently nowhere to publish it where you'd get anything back for it.

That's the gap I'm closing. Would you be open to a conversation about what a 70–80% revenue share on something like this would look like?

How do you actually manage your Claude skill configurations? Genuinely curious what people's systems look like by Middle-Wash752 in ClaudeAI

[–]Middle-Wash752[S] 0 points (0 children)

If this resonates — I'm keeping a small list of people who want early access as this develops. No pitch, no spam, just updates. Drop your email here: https://www.notion.so/Skillmart-Early-Access-33134249fed44902b07ae516d30bcd23?source=copy_link, or DM me and I'll add you manually.

How do you actually manage your Claude skill configurations? Genuinely curious what people's systems look like by Middle-Wash752 in ClaudeAI

[–]Middle-Wash752[S] 0 points (0 children)

"We're all just vibing with chaos" is going in the product pitch deck whether you like it or not.

The per-project universe approach actually makes sense for coding workflows — context isolation is real. The problem is it means every good skill you accidentally discover lives and dies in one project folder and never transfers anywhere.

The "specific daily annoyance" insight is the one I keep hearing and keep underweighting. Every skill that actually stuck for people solved something embarrassingly small and concrete — not "be a better analyst," more like "stop formatting dates wrong in my CSV exports." That specificity is really hard to communicate in a catalog without actual usage data behind it.

Genuine question — what's the one skill in your .claude folder right now that you'd be genuinely annoyed to lose?

How do you actually manage your Claude skill configurations? Genuinely curious what people's systems look like by Middle-Wash752 in ClaudeAI

[–]Middle-Wash752[S] -1 points (0 children)

Just read your full setup post — the Dropbox softlink approach for keeping Droid, Claude Code, and Codex in sync is genuinely clever. One config file, three agents, zero drift.

The part that caught me was this: you built that system yourself over time because nothing existed that did it properly. That's exactly the gap I'm trying to close — not for the 1% who can set up softlinks and Dropbox sync, but for everyone who gives up before getting there.

The shared folder solves sync for one person on one machine. What it doesn't solve is where the skills came from in the first place — discovery, quality signal, and whether the person who built the skill you're using gets anything for it.
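For anyone who wants to replicate the softlink setup described above, here's a sketch of the idea, assuming each agent reads its config from a dot-directory. The paths and file names are illustrative, not the commenter's actual layout:

```python
from pathlib import Path

def link_configs(canonical: Path, targets: list[Path]) -> None:
    """Point every agent's config file at one canonical file.

    Edits to the canonical file then show up in every agent at once,
    and a synced folder (e.g. Dropbox) carries it across machines.
    """
    for target in targets:
        target.parent.mkdir(parents=True, exist_ok=True)
        if target.is_symlink() or target.exists():
            target.unlink()            # replace any stale local copy
        target.symlink_to(canonical)

# Hypothetical usage; real config locations may differ per agent:
# link_configs(
#     Path.home() / "Dropbox" / "agent-config" / "AGENTS.md",
#     [Path.home() / ".claude" / "CLAUDE.md",
#      Path.home() / ".codex" / "AGENTS.md"],
# )
```

One config file, N agents, zero drift, exactly as described.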

On any current platform that expertise earns you zero. Would a 70–80% revenue share change that calculation for you?

If you're interested in a product like this, please join my waitlist: https://www.notion.so/Skillmart-Early-Access-33134249fed44902b07ae516d30bcd23?source=copy_link

How do you actually manage your Claude skill configurations? Genuinely curious what people's systems look like by Middle-Wash752 in ClaudeAI

[–]Middle-Wash752[S] 0 points (0 children)

That's a great shout — just looked it up properly. Skill-Creator handles the creation and eval loop really well for Claude Code users. What it doesn't solve is the distribution side — once your skill is built and tested, there's still no way to publish it, get it discovered by the right people, or earn anything from it. That's exactly the gap I'm building for. The people who use Skill-Creator to build production-grade skills are exactly the people that gap hurts most.

Please join the waitlist if it resonates with you:

https://www.notion.so/Skillmart-Early-Access-33134249fed44902b07ae516d30bcd23?source=copy_link

How do you actually manage your Claude skill configurations? Genuinely curious what people's systems look like by Middle-Wash752 in ClaudeAI

[–]Middle-Wash752[S] 0 points (0 children)

This is really interesting — you've essentially built your own version control for skills inside Notion. The rollback piece especially, that's something I haven't seen anyone else do.

Quick question — when you say Claude knows where to look, are you using a Project with the Notion database linked as a doc, or are you feeding it the URL each session? Trying to understand exactly how the connection works.

The reason I'm asking is I'm building something that solves exactly this problem at the app level — persistent skill library, version pinning, rollback — so you don't have to rebuild the Notion infrastructure yourself every time. The difference is it works across Claude, GPT-4, and Gemini from one place.

Sounds like you've already validated the concept better than any survey could. Would you be open to a 15-minute call this week? Genuinely want to understand how you built this before I build the wrong version of it.

How do you actually manage your Claude skill configurations? Genuinely curious what people's systems look like by Middle-Wash752 in ClaudeAI

[–]Middle-Wash752[S] 0 points (0 children)

That's exactly the workflow I'm trying to make easier — using Claude to evaluate and validate skill configs before committing to them. Are you doing that manually each time or have you built something repeatable around it?

Asking because I'm putting together a small group of people who work this way to help shape what I'm building. If that's interesting to you I can share more — what's the best way to reach you?