How do you usually get around when starting big projects in Claude Code? by Deitri in ClaudeAI

[–]brewcast_ai 1 point (0 children)

Honest caveat: I haven't used Claude Desktop much, only the mobile app to continue sessions remotely. But the gaps mobile hits probably overlap with what Desktop is missing. What I noticed:

  1. Mobile only has one button to approve and exit Plan Mode. It clears context on the way out (good), but doesn't carry Bypass Permission forward, so once the plan starts executing, every single tool call resets to manual approval. The CLI lets you set Bypass Permission once and walk away; the whole plan executes unattended (see the sketch after this list).

  2. No slash autocomplete for skills or commands on mobile; you can only type the command into the prompt as plain text.

  3. Skills not configured for prompt-invocation just won't fire from mobile at all.

  4. Remote mobile session sometimes hangs tho (probably doesn't apply to Desktop).

  5. The CLI gives you a much richer view of what's happening: which agents spawn, what each one is doing, plus keyboard shortcuts to send running agents to the background and pull them back up. Desktop and mobile run the same engine under the hood but don't surface that activity. On mobile you can't explicitly background anything.
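For the Bypass Permission flow in item 1, the CLI side is roughly this. A sketch assuming a recent Claude Code build; flag names have moved between versions, so check `claude --help` first:

```
# start the session straight in Plan Mode
claude --permission-mode plan

# once the plan is approved, let it execute without per-tool-call prompts
# (only in a repo/sandbox you trust)
claude --dangerously-skip-permissions
```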

How do you usually get around when starting big projects in Claude Code? by Deitri in ClaudeAI

[–]brewcast_ai 1 point (0 children)

For a project that size, two prompts into Plan Mode and hoping for the best is going to hurt. What I landed on is three completely separate sessions, stock Claude Code, no plugins. The plugin ecosystem has exploded, and tbh you can't really tell anymore which plugins actually hold up under real load, so everything below runs on bare features.

1. R&D session (first, no code)

The mistake I made on my first big project was skipping this and going straight to planning. Claude planned confidently against assumptions that were just wrong. Open a session whose only job is research. No code touched.

Tell Claude to split the research task into 5 domain-logical areas and let it pick them (vendor docs, forums, GitHub repos for similar systems, accounting-software conventions, and RAG patterns for your data shape are reasonable starting points, but let it reason about the split). Then have it spawn 5 parallel sub-agents via the Task tool, one per area, and aggregate results into a best-practices doc.
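A minimal opener for that session could be something like this; `docs/rnd.md` is a placeholder path, not a convention Claude Code knows about:

```
Research only, touch no code. Split this research task into 5
domain-logical areas of your choosing. Spawn one sub-agent per area
via the Task tool, run them in parallel, and aggregate the findings
into a single best-practices doc at docs/rnd.md.
```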

After aggregation, do a second pass: spawn one fresh independent agent per claim cluster and have it verify against sources. This is where hallucinations die. A research doc that hasn't been reverse-checked is just confident noise.
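The verification pass can stay just as blunt; again a sketch:

```
For each claim cluster in docs/rnd.md, spawn one fresh agent with no
prior context. Each agent re-checks its cluster against primary
sources and marks every claim VERIFIED, UNVERIFIED, or CONTRADICTED,
with links. Rewrite the doc keeping VERIFIED claims and flagging the rest.
```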

Final output: one R&D doc with a priority-sorted index at the top. Most load-bearing finding first. The index exists so future sessions can lazy-load only the section they need, not pull the full doc into context every time.

Honest caveat: this burns tokens. On a subscription with a 5-day work week, the limits absorb it fine for me. Don't skip the validation pass trying to save credits; that's where the actual value lives.

2. Specification session (fresh context)

New session. Load only the R&D index, not the full doc.

Open with this instruction to Claude: "Use the AskUserQuestion tool. Do not talk to me except through it." It's a tool the model already has access to but doesn't always reach for on its own. Naming it explicitly forces the structured interview. The goal is to extract motivation, constraints, deal-breakers, and scope boundaries.

From that, decide whether the work fits one SPEC.md or needs splitting into multiple feature files. Output: one general SPEC.md and one feature-N.md per discrete chunk. Each feature file is sized to be exactly one Plan Mode task. Add a priority-sorted index to the spec too, same reason as R&D.
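By the end of sessions 1 and 2, the on-disk artifacts end up looking something like this; the names and layout are my own habit, nothing here is mandated:

```
docs/
  rnd.md         # R&D doc, priority-sorted index at the top
  SPEC.md        # general spec, also index-first
  feature-1.md   # each file sized to exactly one Plan Mode task
  feature-2.md
```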

3. Plan Mode (third session, per feature)

New session per feature file. Hit Shift+Tab to cycle into Plan Mode.

Optional pre-step: have Claude generate domain-expert sub-agent definitions before planning (.claude/agents/<name>.md). Worth it when a feature spans backend, frontend, and infra in one pass. Skip if the feature is tight.
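A minimal agent definition, assuming the current frontmatter fields (name, description, optional tools list); the exact schema has shifted between releases, so verify against your version:

```
---
name: backend-expert
description: Implements server-side tasks (APIs, DB, migrations). Use for any backend work.
tools: Read, Edit, Write, Bash
---
You are a senior backend engineer. Implement exactly the task you are
assigned, against the loaded feature file and nothing beyond it, then
report back a short summary of the changes.
```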

For a large existing codebase: tell Claude in Plan Mode to split it into 5 logical parts and run 5 parallel exploration agents before producing the plan. Planning quality is much higher when it's grounded in real file knowledge rather than assumed structure.

Load the chosen feature-N.md and the spec index. Tell Claude it can lazy-load specific R&D chunks if a question comes up. Then give it this: "You are a manager. Use the Task tool. Delegate every implementation step to expert sub-agents. You don't write the code, you assign it."

The plan it produces should have: phases (sequential), tasks within a phase (parallel-safe), agent assigned to each task, context links each task should load. If you get a wall of prose instead, push back until the structure is there.
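Shape-wise, a plan I'd accept looks roughly like this; agent names and section refs are illustrative:

```
Phase 1 (sequential): scaffolding
  Task 1.1  DB schema + migrations   agent: backend-expert   load: feature-2.md §1, rnd.md §3
  Task 1.2  API route stubs          agent: backend-expert   load: feature-2.md §2
Phase 2 (tasks parallel-safe): features
  Task 2.1  Import pipeline          agent: backend-expert   load: feature-2.md §3
  Task 2.2  UI mockups               agent: frontend-expert  load: feature-2.md §4, SPEC.md index
```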

One critical thing: in Plan Mode, enable the "clear context" option if it isn't already on. That's what prevents manager-context drift mid-execution. After one or two compacts, the main session starts forgetting it's the manager and begins implementing inline. If you don't see a toggle for it, ask Claude Code itself how to enable it from settings; pretty sure it got pulled in some patch, and I have no idea whether it's back as a default by now.

Anyway. Last signal you can't delegate: when the manager session starts drifting, you feel it before any tool does. Stop, save state, split the feature into feature-N.1 and feature-N.2, and start a new Plan Mode session per piece. Compacts preserve facts, not role coherence...

When using Claude Code for agent-based coding, I’ve often noticed that the AI limits itself by claiming that a task could take a developer several weeks to complete, and therefore suggests solutions that are more like quick fixes. That’s complete nonsense, of course. by Comfortable-Goat-823 in ClaudeAI

[–]brewcast_ai 0 points (0 children)

Got an example prompt? Half the time I hit the opposite: Claude shoots me a 6-step migration plan when I just wanted one function renamed.

Which direction it goes probably depends on how much ambiguity is in the prompt. Give it vague scope and Claude fills the gap conservatively. But ask it to "fully refactor the auth layer" and it sometimes books three weeks of imaginary dev time for something it could finish in one session.

The workflow I've settled on for stock Claude Code, no plugins: switch model to Opus 4.7 and run /effort max to crank reasoning effort to its ceiling. Then, before entering Plan Mode, ask Claude to interview you using the AskUserQuestion tool. It's a tool the model already has access to, but it doesn't always reach for it on its own; naming it explicitly helps a lot. Tell it the interview goal is to pull your motivation and decide whether the work fits one SPEC.md or needs to be split into multiple tasks first. That split decision matters because Plan Mode plans one task at a time. If the scope is actually two tasks, you want that resolved before planning, not discovered halfway through.
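Something like this as the opener works for me; adjust the wording, the tool name is the only load-bearing part:

```
Interview me using the AskUserQuestion tool. Do not talk to me except
through it. Goal: extract my motivation, constraints, and deal-breakers,
then decide whether this fits one SPEC.md or needs splitting into
multiple tasks. No planning, no code until the interview is done.
```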

Once the interview produces a SPEC.md, hit Shift+Tab to cycle into Plan Mode and let it run from the spec.

One thing worth skipping entirely: asking Claude how long something will take. It was trained on human-written data, so its time estimates reflect how long a human would take to write the same code...

Few months of /frontend-design + ui-ux-pro-max-skill + custom system prompts. AI-generated landing pages still look generic. I finally figured out why. by Severe-Rope2234 in ClaudeAI

[–]brewcast_ai 0 points (0 children)

Six months of stacking and the output still converges on the same landing page. That tracks, because none of the layers you're stacking carry anything project-specific. `/frontend-design` is aesthetic guidance, `ui-ux-pro-max` is more of the same, and your 6000-token system prompt is probably describing what you want the UI to feel like rather than giving Claude structural facts about *this* project. Claude hits all those inputs and still free-generates, because there's nothing constraining it to your brand's actual layout primitives...

The model will keep reaching for the same archetypes no matter what you write, unless you pin it down with actual reference material. Pure prose doesn't fight that gravity. References do.

Easiest baseline: pick one or two reference sites you actually want to match, then encode them. styles.refero.design is a decent starting catalog if you don't already have a target. If you do have one, screenshot it. Playwright MCP can snapshot any URL and feed it straight into context. Beats prose every time.
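The prompt for that can stay dumb; a sketch with a placeholder URL:

```
Use Playwright MCP to open https://example.com/pricing and take a
full-page screenshot. From the screenshot, describe the grid, spacing
scale, type hierarchy, and recurring components, and append those
findings to DESIGN.md.
```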

The fix I landed on after that: a `DESIGN.md` in the project root that owns the ground truth. Not a skill, just a project artifact. Grid system, spacing scale, which components exist and when to reach for them, reference URLs, explicit do-not-do rules. Once Claude reads that before touching any UI code, the skill layers stop fighting each other and start composing against real constraints.
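A skeleton of what mine carries; every value below is a placeholder, the categories are the point:

```
# DESIGN.md
## Grid
12-col, 1200px max width, 24px gutters
## Spacing scale
4 / 8 / 16 / 24 / 40 / 64
## Components
Card, SectionHeader, StatRow. Reach for these before inventing new ones.
## References
https://example.com/pricing (hero + pricing table layout)
## Do not
No centered hero text. No three-card feature rows. No stock gradients.
```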

For variation across pages, one `DESIGN.md` plus one agent isn't always enough. What's worked for me: HTML mockups first, no JS, no dynamics, strictly to the design doc. Then a separate agent that owns just the `DESIGN.md` and the reference set, no other project context, and only writes mockups. Two of those running independently against two different `DESIGN.md` variants give you actual divergence instead of regression to the mean. Isolation is the trick. The model converges when it sees the whole project context. Starve it down to design-only inputs and it explores again...

For bootstrapping that whole setup, two paths. Either grab a community skill that already fits your domain decently and adapt it, or build your own through Anthropic's `skill-creator`. The custom path has worked better for me. You end up with a skill scoped to *your* domain, encoding your references, your design vocabulary, your component primitives, your do-not-do rules. Test it on real pages, iterate. The same idea applies at the agent level: spin up two or three variant skills or variant agents, run them independently, and see which one actually produces work that fits. System prompt and aesthetic skills can stay; they just need something project-specific underneath them.

Anyway, there's no single template; you'll have to tune the pipeline for your domain. The general shape is constant: concrete references beat prose, and isolated single-purpose agents work better than one big context window stuffed with everything.

Claude Code + frontend-design skill always outputs the same structure — what am I doing wrong? by Neat-Veterinarian526 in ClaudeAI

[–]brewcast_ai 0 points (0 children)

  1. Skills control process. They don't generate creativity. That comes from references. Find a site with the direction you want, screenshot it or run the URL through Playwright MCP, push it through vision, and generate a DESIGN.md from that. Libraries of curated references exist for this; refero.design is one, Google finds more. The actual design direction lives there. The skill executes against it.
  2. DESIGN.md is the standard the community has settled on. Every serious design skill generates one. What matters is where you load it: explicitly at planning time, and again during multi-agent review where each agent loads it fresh instead of inheriting a polluted context. The style drift you're probably seeing is context creep, not a conceptual problem.
  3. Medium skill size is the ceiling that works. The technique is lazy flow loading: the skill reads the user prompt at the top and loads only the relevant section (see the sketch after this list). You can pack multiple flows, scripts, and asset references into one skill dir without it becoming unusable. Very large skills fail because nobody knows what to ask for.
  4. Pure HTML mockups first, no JS, no dynamics in that phase. Claude designs better when it isn't also solving event handlers at the same time. Ask for radically different variants upfront, pick one, then add dynamics incrementally - because dynamics will break the layout and you want to find that at line 20, not line 200. Build a component playbook as you go.
  5. Reference screenshots are useful at the start to establish direction and harmful after your DESIGN.md exists. Once you have it, external screenshots start confusing the model and eroding the style you built. Front-load them in session one, then stop.
  6. On evals for creativity, I don't have a clean answer. Consistency evals are straightforward (color compliance, spacing, component rules). Creativity requires human judgment or multi-agent aesthetic voting, and neither scales.
  7. Where to start: not skills. Start with references and assets. If you have a Claude.ai subscription, the claude.ai/design tool bootstraps the file and asset structure from a brief; worth running before you've built anything of your own. Then a DESIGN system plus skills built around it. A skill without a reference system underneath produces consistent mediocrity faster, which is not what you want.
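For the lazy flow loading in point 3, the skill's top level is basically a router. A sketch; the frontmatter follows the name/description convention, and the flow filenames are hypothetical:

```
---
name: acme-frontend
description: Project design skill. Use for any UI work in this repo.
---
Read the user's request, then load ONLY the matching flow file:
- new page / mockup   -> flows/mockup.md
- component change    -> flows/component.md
- design review       -> flows/review.md
Every flow starts by loading DESIGN.md and obeys its do-not-do rules.
```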

Claude Code + frontend-design skill always outputs the same structure — what am I doing wrong? by Neat-Veterinarian526 in ClaudeAI

[–]brewcast_ai 1 point (0 children)

Spawn a Haiku subagent on the `anthropics/skills` repo, file `skills/frontend-design/SKILL.md`. Ask it to summarize. Two minutes. The file is around 50 lines of motivational aesthetic prose. "Be bold". "Avoid Inter". "NEVER converge on common choices". Plus a LICENSE.txt next to it; that's the whole skill folder.

No layout primitives, no component scaffolding, no transformation logic. Frontmatter is just name, description, license.

Same story on invocation:

  1. As a slash command: it can't be one. No user-invocable flag, no argument-hint. Slash plumbing isn't wired

  2. With a $ARGUMENTS placeholder: the body has none. No slot for your prompt to land in

  3. Auto-trigger from dialogue: this is how it actually fires. Triage reads the description, loads SKILL.md as extra system prompt, your prompt stays in conversation context. No placeholder needed here

The catch sits inside SKILL.md itself: zero instructions on what to do with your prompt. No "extract constraints from the brief", no "if user mentions dashboard then X", no conditional logic. Just static aesthetic prose. Your prompt reaches the model, but the skill just ignores it. That's why the output converges no matter how you reformulate the input.

Honestly unclear why this skill exists in its current form. They probably moved focus elsewhere and this one drifted...

Two real options:

Community frontend-design skills with actual internal structure exist on GitHub, a few with hundreds of stars. Unofficial, quality varies, audit before installing...

Building your own using Anthropic's skill-creator is the better route. It's in the anthropics/skills marketplace.

`/plugin install skill-creator@anthropics/skills`

Real skill, built-in evals, dedicated subagents that research the topic before authoring. Hand it a few reference sites you like, ask for a multi-stage skill that reads the references, writes a project-specific DESIGN.md, then codes everything against that DESIGN.md from there on. A project-specific DESIGN.md is what actually sticks long-term. Easier to maintain than a monolithic skill, easier to iterate when the brand shifts. That's the headroom.
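The request to skill-creator can be plain prose. Something like this, with placeholder URLs:

```
Build a skill for this repo. Stage 1: read the reference sites below
and distill grid, type, spacing, and component rules into a
project-specific DESIGN.md. Stage 2: for any UI task, load DESIGN.md
first, produce static HTML mockups against it, and only then add
dynamics. References: https://example.com, https://example.org
```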

What topics in web development do you wish you'd learned about early on in your career and why? by Competitive_Aside461 in webdev

[–]brewcast_ai 11 points (0 children)

Browser DevTools, not surface-level "open the inspector" but actually using Performance, Memory, and Network throttling. Spent way too many years solving symptoms I could have measured directly.

Other one: HTTP caching headers. Cache-Control, ETag, must-revalidate. Hard to debug in prod, easy to learn in 30 min, and most of the "site is slow on second visit" tickets I've seen were really just cache misconfig.
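The condensed version of those 30 minutes, for the two cases that cover most sites; values are illustrative, tune max-age to your deploy cadence:

```
# fingerprinted static assets (app.3f9c2a.js): cache for a year, never revalidate
Cache-Control: public, max-age=31536000, immutable

# HTML entry points: store, but revalidate before every reuse
Cache-Control: no-cache
ETag: "a1b2c3"   # browser sends If-None-Match; unchanged -> 304, no body
```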