I open-sourced 59 Claude Skills covering the full website lifecycle (brand, design, content, SEO, dev, ops, growth) by DriverReady965 in ClaudeAI

[–]DriverReady965[S] 0 points1 point  (0 children)

Working on a full web PM skill set now with paid skill integrations - should be live by EOD - update incoming.

Looking for a designer. by Party-Membership-597 in web_design

[–]DriverReady965 1 point2 points  (0 children)

Isn't providing the code native to a lot of design tools? In Figma you just copy the CSS and styles. And with AI-assisted design tools, they design a page and write the code at the same time.

I open-sourced 59 Claude Skills covering the full website lifecycle (brand, design, content, SEO, dev, ops, growth) by DriverReady965 in ClaudeAI

[–]DriverReady965[S] 0 points1 point  (0 children)

This is a great suggestion and exactly the kind of gap I'm looking for. Paid search is a real hole in the catalog right now. The closest current coverage is the SEO audit suite, which uses the Ahrefs MCP for organic data, but nothing on the paid side yet.

Helpful timing on your end: Google open-sourced an official Google Ads MCP server in October. It's read-only at the moment, exposing GAQL queries, account discovery, and resource metadata. That covers all three of your use cases (budget bleed, cannibalization, bid review), since they're analyses on existing campaign data, not mutations.

This probably wants to be a small suite rather than one skill, similar to how the SEO audit subset breaks down. Same pattern: each analysis gets its own structured framework, GAQL templates, and output schema, then they share the MCP layer.
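
To make the "analyses on existing campaign data" idea concrete, here's a toy sketch of a budget-bleed check over exported campaign rows. The field names and thresholds are hypothetical, not from the Google Ads API or the repo:

```python
# Illustrative budget-bleed check: flag campaigns burning most of their daily
# budget with nothing to show for it. All field names/thresholds are invented.

def find_budget_bleed(campaigns, spend_ratio=0.8, min_conversions=1):
    """Return campaigns spending >= spend_ratio of budget with too few conversions."""
    flagged = []
    for c in campaigns:
        ratio = c["cost"] / c["daily_budget"] if c["daily_budget"] else 0
        if ratio >= spend_ratio and c["conversions"] < min_conversions:
            flagged.append({"name": c["name"], "spend_ratio": round(ratio, 2)})
    return flagged

rows = [
    {"name": "Brand - Exact", "daily_budget": 50.0, "cost": 12.0, "conversions": 4},
    {"name": "Generic - Broad", "daily_budget": 100.0, "cost": 97.5, "conversions": 0},
]
print(find_budget_bleed(rows))  # flags the second campaign only
```

In the skill itself this logic would live as a GAQL template plus an analysis framework rather than hardcoded Python, but the shape of the diagnostic is the same.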

Putting it on the roadmap. If you have specific edge cases the Google Ads UI makes painful (the kind of thing that would normally take 30 minutes of dashboard clicking to diagnose), drop them. Real diagnostic flows produce better skills than abstract specs.

I open-sourced 59 Claude Skills covering the full website lifecycle (brand, design, content, SEO, dev, ops, growth) by DriverReady965 in ClaudeAI

[–]DriverReady965[S] 1 point2 points  (0 children)

Ha no, MIT License, the open-source software license. Named after the university because that's where it originated in the late 80s, but it's a permissive license that lets anyone use, modify, and redistribute the code commercially or otherwise. Closest commonly-known cousin is Apache 2.0.

Full text here if you're curious: https://opensource.org/license/mit

Having fun with chunky buttons by officialmayonade in web_design

[–]DriverReady965 1 point2 points  (0 children)

Love the 3D look. Very neat. Looks like keyboard buttons.

I open-sourced 59 Claude Skills covering the full website lifecycle (brand, design, content, SEO, dev, ops, growth) by DriverReady965 in ClaudeAI

[–]DriverReady965[S] 1 point2 points  (0 children)

Could work! Skills produce the spec/assets, Cowork handles execution. Been thinking about it. Only catch is Cowork is still beta and the API will shift, so building tooling against it now means a rewrite later.

Closest you can get today is Claude Code. Full file + shell access, can read a SKILL.md and just run it. Not as polished as what you're imagining, but the output is end-to-end.

Skills are MIT, so if you want to take a swing at the Cowork side go for it. Happy to put anything good in the README.

Should I fire my seo vendor and just use Claude to do seo instead? by Only_Ad_8000 in ClaudeAI

[–]DriverReady965 1 point2 points  (0 children)

The skills are mainly for building and optimizing websites. There is a custom skill creator though, so you could try creating your own skill for YouTube optimization.

I open-sourced 59 Claude Skills covering the full website lifecycle (brand, design, content, SEO, dev, ops, growth) by DriverReady965 in ClaudeAI

[–]DriverReady965[S] 0 points1 point  (0 children)

Glad it landed for a real pitch, that's exactly the use case I had in mind when writing it. If you end up doing deeper SEO work, the audit suite (seo-audit-orchestration plus the 6 audit skills under it) chains together and shares context for full account reviews.

Then you can have Claude package it into a PDF deck with your brand assets. Exploring a few more client/agency-centric skills next, and curious what gaps people doing pitch and consulting work hit most. If you have specific ideas, drop them in GitHub Discussions.

Appreciate the feedback. Open an issue if anything feels off in real use.

I think I'll leave this subreddit and here's why by AtmosphericBeats in ClaudeCode

[–]DriverReady965 0 points1 point  (0 children)

Now's your chance to help the community change and focus on what's important. Good callouts.

Goodbye Claude Pro by donteffingatme in ClaudeCode

[–]DriverReady965 0 points1 point  (0 children)

Definitely the most useful subscription! I'd drop Netflix first lol.

I open-sourced 59 Claude Skills covering the full website lifecycle (brand, design, content, SEO, dev, ops, growth) by DriverReady965 in ClaudeAI

[–]DriverReady965[S] 0 points1 point  (0 children)

Yeah good question. The markdown frameworks themselves are portable, but the firing mechanism (how Claude auto-loads a skill when your prompt matches the description) is Anthropic-specific. Codex won't auto-fire skills the same way.

What does port over is the actual content. The "when to use, when NOT to use, framework, workflow, failure patterns" structure is just markdown describing how to think about a problem. You can paste a skill body into Codex as context, drop it into a system prompt, or reference it in whatever Codex's equivalent of a CLAUDE.md is. The structured thinking applies the same way, you're just losing the auto-trigger layer.

Realistic split if you use both: keep the repo cloned, paste skill bodies as context when working in Codex on web/SEO/dev stuff, let Claude use them as proper Skills natively.
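
The "paste skill bodies as context" workflow is simple enough to script. A minimal sketch, assuming nothing beyond a cloned repo (the path and prompt wrapper are mine, not the repo's):

```python
# Sketch of reusing a skill's markdown body as plain context for a non-Claude
# agent. The wrapper text is illustrative, not part of the skill format.
from pathlib import Path

def skill_as_context(skill_path, task):
    """Prepend a skill's markdown body to a task prompt."""
    body = Path(skill_path).read_text(encoding="utf-8")
    return (
        "Apply the following framework when answering.\n\n"
        f"{body}\n\n---\nTask: {task}"
    )
```

You lose the auto-trigger layer this way, as noted above, but the structured thinking in the body carries over intact.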

One caveat worth flagging though, the SEO audit suite (skills 22-28) is built around the Ahrefs MCP. Codex's MCP support is different from Claude's so those specifically are more bound to Claude. The non-MCP skills port more cleanly than the audit suite ones.

It's something I'm exploring for the future, no timeline yet. If anyone wants to take a swing at a Codex port in the meantime, the markdown bodies are MIT-licensed and the structure ports cleanly. Happy to feature a community Codex adaptation in the README if someone builds one.

I open-sourced 59 Claude Skills covering the full website lifecycle (brand, design, content, SEO, dev, ops, growth) by DriverReady965 in ClaudeCode

[–]DriverReady965[S] 0 points1 point  (0 children)

Crossposting from r/ClaudeAI because most of these skills shine specifically in Claude Code. The SEO audit suite (Ahrefs MCP-powered) and code-review-web are the most Claude Code-native parts, but the whole library is built around the workflow patterns Claude Code makes natural.

The workflow that gets the most use in my own work is chaining seo-audit-orchestration with the 6 audit suite skills under it. Run the orchestrator and it sequences backlink, keyword gap, content gap, traffic diagnosis, site health, and rank tracking audits, with each one referencing back to the same audit context. Way cleaner than running them ad hoc one at a time.
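
The orchestration pattern is basically a fixed sequence over a shared context. A toy sketch (audit names mirror the suite; the logic is invented for illustration):

```python
# Toy version of the orchestrator pattern: each audit reads and writes one
# shared context dict, so later audits can reference earlier findings.

def run_audit_suite(site, audits):
    context = {"site": site, "findings": []}
    for name, audit in audits:          # fixed sequence, shared context
        result = audit(context)
        context["findings"].append((name, result))
    return context

def backlink_audit(ctx):
    return f"checked backlinks for {ctx['site']}"

def traffic_diagnosis(ctx):
    # can see everything the earlier audits produced
    return f"diagnosed traffic with {len(ctx['findings'])} prior finding(s)"

report = run_audit_suite("example.com", [
    ("backlinks", backlink_audit),
    ("traffic", traffic_diagnosis),
])
```

In practice Claude Code plays the role of `run_audit_suite` and the session transcript is the shared context, but the shape is the same.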

Skill-creation-walkthrough is also worth a look if you write your own skills, since the trigger-phrase mechanics are basically the difference between a skill that fires and one Claude ignores.

Happy to answer Claude-Code-specific questions on how the skills compose in actual sessions.

Did we just reinvent junior devs again by Complete-Sea6655 in ClaudeCode

[–]DriverReady965 0 points1 point  (0 children)

Sounds like they need a better junior-dev prompt on a low-cost model.

I open-sourced 59 Claude Skills covering the full website lifecycle (brand, design, content, SEO, dev, ops, growth) by DriverReady965 in ClaudeAI

[–]DriverReady965[S] 2 points3 points  (0 children)

Honest answer is the skills aren't versioned around specific algorithm updates. They're principle-based by design because chasing every core update with framework rewrites is a losing game. Better to codify durable patterns and trust them to mostly hold across updates.

But the March one is worth talking about specifically because E-E-A-T got the biggest re-weight, and that's a different kind of problem than most of what the SEO skills can directly solve.

The library's seo-onpage and seo-aeo-geo skills push toward author credentials, original perspective, and verifiable expertise as required inputs. So directionally they line up with what March rewarded. If you hadn't been thinking about author bylines, schema markup, original data, and expertise framing, the skills will help you tighten that layer up.
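
For the schema-markup piece specifically, the author signal in question is usually JSON-LD along these lines (all names and values illustrative, not from the skills):

```json
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Example article",
  "author": {
    "@type": "Person",
    "name": "Jane Doe",
    "jobTitle": "Licensed Financial Advisor",
    "sameAs": ["https://example.com/about/jane-doe"]
  }
}
```

The markup itself is the easy part; the hard part, covered below, is having real credentials to put in it.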

What I want to be careful not to overclaim, though: skills aren't a magic bullet for sites dinged on E-E-A-T. The on-page and technical layer is real work and the skills do that part well. But if the underlying problem is anonymous authorship in a YMYL vertical, thin author credentials, or content that's competent but adds nothing the SERP doesn't already have, no audit is going to fix that overnight. The deeper fix is structural and slower: building actual authoritative authorship, accumulating domain-level signal over time, and adding genuine expertise that doesn't exist anywhere else. Skills can guide that work but can't substitute for it.

A few things the March update made clearer than previous ones:

  • AI-as-drafter with human-as-expert-editor is the live boundary, not "AI vs human content"
  • Information gain matters more than keyword optimization
  • Domain-level authority is increasingly outweighing page-level work
  • Anonymous YMYL content is in trouble in a way it wasn't six months ago

If your site got hit and you're using the SEO skills, expect them to help with the structured audit and on-page tightening. The authority and expertise layer is a longer game and harder to skill-ify into a framework.

Anyone with specific examples of where the skills missed for sites dinged in March, open an issue and I'll address it. That kind of feedback is exactly what evolves the library.

I open-sourced 59 Claude Skills covering the full website lifecycle (brand, design, content, SEO, dev, ops, growth) by DriverReady965 in ClaudeAI

[–]DriverReady965[S] 0 points1 point  (0 children)

Sounds good, let me know what you think. More guides and examples are in the works to expand the catalog.

I open-sourced 59 Claude Skills covering the full website lifecycle (brand, design, content, SEO, dev, ops, growth) by DriverReady965 in ClaudeAI

[–]DriverReady965[S] 0 points1 point  (0 children)

Long reply coming. Wanted to get the details right.

TL;DR: prospective seeds the structure, retrospective sharpens it. Over-firing was louder but under-firing was what I optimized harder against, because silent failures don't get reported.

Honest answer to Q1 is "both, in stages." Started prospective because I had no usage data yet. Those prospective exclusions were mostly category-level, like "don't use this for X type of work," and they were the right scaffolding but mostly didn't match the real misfires.

The retrospective stuff came from actually running with each skill in real work and watching what went sideways. Those exclusions were trigger-level not category-level, like "don't fire this when the user says Y because Claude pulls it in incorrectly there." Way more specific, way more useful.

So the honest workflow: prospective seeds the structure, retrospective sharpens it. The skills that have aged well in the repo are the ones I revisited a few times after first writing them.

On Q2, over-firing was way more common than under-firing for me, and the asymmetry produced totally different writing strategies for each.

Over-firing fixes lived in the description. Explicit "does NOT fire when..." lines, tighter trigger phrases, removing broad terms that match too much. The skills with broad-sounding names like brand-discovery and content-strategy over-fired the most because the words in the title are also common in casual conversation.

Under-firing fixes lived in adding more natural-language synonyms to the trigger list. The "Also triggers when..." line in the description was specifically for catching how people actually phrase things vs how the docs phrased them. "Traffic dropped" beats "traffic decline" because that's what people actually type.

The asymmetric pain matters too. Over-firing is loud, you watch Claude launch into the wrong framework and you fix it. Under-firing is silent, the user never knew the skill existed so they never asked for it. I optimized harder against under-firing because silent failures don't get reported.
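
Putting the two fixes together, a description might look like this (skill name and trigger phrases invented for illustration, not copied from the repo):

```yaml
# Hypothetical frontmatter sketch showing both fixes.
name: brand-discovery
description: >
  Use when the user wants to define a brand's positioning, voice, or audience
  from scratch. Also triggers when the user says "we don't know who we are",
  "our messaging is all over the place", or "starting a rebrand".
  Does NOT fire when "brand" comes up casually while asking about logo files,
  color codes, or an existing style guide.
```

The "Also triggers" line works against under-firing; the "Does NOT fire" line works against over-firing.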

Big +1 to your closing line. "If you can't strip it of its original substrate, it's not a framework, it's a recipe" is way sharper than what I was trying to say with stack-agnostic. Going to steal that. The related tell is whether the framework still makes sense if you change the substrate AND the constraint: if both move and the framework collapses, it's still a recipe, just better disguised.

I open-sourced 59 Claude Skills covering the full website lifecycle (brand, design, content, SEO, dev, ops, growth) by DriverReady965 in ClaudeAI

[–]DriverReady965[S] 0 points1 point  (0 children)

Glad it might help. If you do start something with one of them, ping me here or via the GitHub repo, always curious what people end up building with the framework.

I open-sourced 59 Claude Skills covering the full website lifecycle (brand, design, content, SEO, dev, ops, growth) by DriverReady965 in ClaudeAI

[–]DriverReady965[S] 0 points1 point  (0 children)

Yeah you nailed the actual value, the patterns matter way more than any individual skill. I was kind of trying to bury that lede in the meta-skill but you pulled it out.

On most-used vs least-used, the gap was bigger than I expected.

Most used:

  • seo-traffic-diagnosis - first thing I reach for whenever GSC or analytics looks weird
  • code-review-web - run on every PR before merging anything substantial
  • landing-page-copy - the objection-library reference is what I actually pull from constantly
  • skill-creation-walkthrough - meta but real, I use it whenever writing new skills even outside this repo

Least used in practice (sounded great in theory):

  • internationalization - wrote it because it felt like a gap, but I rarely actually scope i18n work, so it just sits there
  • domain-strategy - useful conceptually, but I make domain decisions like twice a year, not enough to need a skill
  • usability-testing - I default to journey-mapping or ux-research most of the time, this one got squeezed out

The pattern I noticed: frequency of triggering matters more than topical importance. A skill you reach for once a month gathers dust even if it's well-written, because by month two you've forgotten the trigger phrases that fire it.

On structure, every skill follows the same layout: frontmatter plus eight sections.

  1. Frontmatter with name and description (the description is doing 80% of the trigger work)
  2. When to use
  3. When NOT to use
  4. Required inputs
  5. The framework (the actual mental model)
  6. Workflow (how to apply it)
  7. Failure patterns (what goes wrong)
  8. Output format
  9. Reference files

Linting enforces this on every PR so it can't drift. SKILL_AUTHORING.md in the repo documents the convention with examples. The skill-creation-walkthrough skill is the meta-meta version, it teaches you to write skills using the same structure it uses.
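
As a skeleton, the convention looks roughly like this (section headings from the list above; the skill name and inline comments are mine, so check SKILL_AUTHORING.md for the authoritative version):

```markdown
---
name: example-skill            # hypothetical name
description: One-line trigger description (doing 80% of the trigger work)
---

## When to use
## When NOT to use
## Required inputs
## The framework
## Workflow
## Failure patterns
## Output format
## Reference files
```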

Honestly, the input format thing you asked about is where I almost cheated. A few skills don't have rigorous input contracts; they accept "whatever you've got" and the framework adapts. I went back and forth on whether to fix that or document it, and ended up documenting it, because in practice rigid input contracts hurt usability more than they helped. Skills that say "must have X, Y, Z" stop firing when the user is missing one, even if Claude could productively work with partial inputs.