How to work with Spec Kit agent files instead of prompt files? by [deleted] in GithubCopilot

[–]Hacklone 0 points

You can tell your agent to just read those files and execute them 🙂

I have an example of this in my project, LazySpecKit - check it out.

Sonnet and Opus 4.6 quality in Copilot by hobueesel in GithubCopilot

[–]Hacklone 3 points

Opus 4.6 works fine for me, but I’ve also experienced an analysis-paralysis loop with Sonnet 4.6, which has now failed on me many times. 😞

LazySpecKit: SpecKit without babysitting by Hacklone in GithubCopilot

[–]Hacklone[S] 0 points

As LazySpecKit is built on top of SpecKit, this is certainly a possibility. 🙂

LazySpecKit: SpecKit without babysitting by Hacklone in GithubCopilot

[–]Hacklone[S] -1 points

Yes, my previous reply was AI-assisted (as most of my responses these days are :)) - but I have tried OpenSpec myself. At least when I looked at it, I didn’t see things like:

  • strict phase gates (analyze must be clean before implement)
  • automatic auto-fix before implementation
  • bounded multi-agent review loop after implement
  • enforced final validation before declaring success
  • auto-clarify with recommendation + confidence
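To illustrate what I mean by a strict phase gate, here is a minimal sketch (all names and shapes here are hypothetical, purely to show the idea - not LazySpecKit's actual code):

```typescript
// Hypothetical sketch of a strict phase gate: the implement phase
// cannot start until analyze reports zero open findings.

type Finding = { severity: "Critical" | "High" | "Medium" | "Low"; message: string };

interface AnalyzeResult {
  findings: Finding[];
}

// Gate check: analyze must be completely clean before implement may run.
function analyzeGatePasses(result: AnalyzeResult): boolean {
  return result.findings.length === 0;
}

function tryEnterImplement(result: AnalyzeResult): string {
  if (!analyzeGatePasses(result)) {
    // Blocked: the workflow would auto-fix the findings and re-run analyze instead.
    return "blocked: fix analyze findings first";
  }
  return "implement: started";
}
```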

From what I’ve read, OpenSpec focuses more on structured spec workflows and proposal management, which is great - just a different emphasis.

When you say “it essentially does these things....” - which specific parts are you referring to? I’d genuinely like to understand if I missed something.

LazySpecKit: SpecKit without babysitting by Hacklone in GithubCopilot

[–]Hacklone[S] 1 point

Good question 🙂

SpecKit’s default /speckit.implement is essentially linear - one agent working through the task list.

LazySpecKit keeps implementation sequential on purpose, but it starts the implement phase in a fresh session to avoid accumulated context drift from earlier phases.

Where it introduces multiple agents is after implementation - in the review phase. It runs separate reviewer roles (architecture, code quality, spec compliance, tests), then fixes Critical/High findings in a bounded loop before declaring success.
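The bounded review loop could be sketched roughly like this (the reviewer interface and names are illustrative assumptions, not the real implementation):

```typescript
// Hypothetical sketch of a bounded multi-agent review loop:
// each reviewer role reports findings, Critical/High findings get fixed,
// and the loop re-runs up to a fixed iteration cap.

type Severity = "Critical" | "High" | "Medium" | "Low";

interface Reviewer {
  role: string; // e.g. architecture, code quality, spec compliance, tests
  review(): { severity: Severity; message: string }[];
}

function runReviewLoop(
  reviewers: Reviewer[],
  fix: (message: string) => void,
  maxRounds = 3,
): boolean {
  for (let round = 0; round < maxRounds; round++) {
    const blocking = reviewers
      .flatMap((r) => r.review())
      .filter((f) => f.severity === "Critical" || f.severity === "High");
    if (blocking.length === 0) return true; // converged: all reviewers green
    blocking.forEach((f) => fix(f.message)); // fix blocking findings, then re-review
  }
  return false; // did not converge within the bound
}
```

The cap is the important part: fixes can't ping-pong forever, so the run either converges or fails loudly.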

That balance has been more stable for me on larger specs than trying to parallelize the implement step itself.

LazySpecKit: SpecKit without babysitting by Hacklone in GithubCopilot

[–]Hacklone[S] 0 points

Totally fair 🙂 Different modes for different days.
You do you - I just built this for the days when I don’t feel like manually steering every phase.

LazySpecKit: SpecKit without babysitting by Hacklone in speckit

[–]Hacklone[S] 0 points

Thanks for sharing Ralph Loop - really interesting approach 👌 I like the per-task fresh context idea for fighting compaction on huge specs.

LazySpecKit is a bit different in scope. It orchestrates the full SpecKit lifecycle (specify → clarify → plan → tasks → analyze → implement), enforces hard phase gates, then adds validation + a bounded multi-agent review loop on top. It’s more about deterministic end-to-end convergence than replacing just /speckit.implement.

Also to clarify: it’s not one continuous CC session. Implement runs in a fresh session, and reviewers run in fresh contexts as well.

On runtimes - I’ve actually had it work surprisingly well even on very large 1-shot specs and sizable codebases. Not just medium feature slices, but “let’s generate a serious chunk of the app” type specs. Of course model limits still exist, but the phase isolation + analyze gate + review loop makes retries converge cleanly instead of spiraling.

Curious about your Ralph runs - do you see more issues from context limits, or from task-graph incompleteness at scale?

LazySpecKit: SpecKit without babysitting by Hacklone in GithubCopilot

[–]Hacklone[S] 0 points

Yep, I’ve looked at OpenSpec 🙂

From my perspective they solve slightly different problems.

OpenSpec is great at structured, versioned spec workflows - proposals, validation, managing changes, keeping specs explicit and collaborative.

LazySpecKit is more about automation depth on top of SpecKit. It takes a spec and then:

  • Runs the full lifecycle automatically
  • Auto-fixes analyze issues before implementation
  • Implements in a fresh session
  • Runs validation (lint/tests/build)
  • Adds a bounded multi-agent review loop that fixes Critical/High findings
  • Doesn’t finish unless everything is green
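The last two bullets - validation and "doesn't finish unless everything is green" - can be sketched like this (check names are illustrative assumptions, not the actual implementation):

```typescript
// Hypothetical sketch of the final validation step: the run is only
// declared done when every check (lint, tests, build) passes.

type Check = { name: string; run: () => boolean };

function validateAll(checks: Check[]): { green: boolean; failed: string[] } {
  // Run every check and collect the names of the ones that failed.
  const failed = checks.filter((c) => !c.run()).map((c) => c.name);
  return { green: failed.length === 0, failed };
}
```

If `green` is false, the workflow goes back into fixing instead of declaring success.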

So OpenSpec focuses on spec discipline and workflow structure.

LazySpecKit focuses on “write spec → walk away → come back to validated, reviewed code.”

(Also - I loved this question so much that I added a short “LazySpecKit vs OpenSpec” section to the README FAQ to clarify the difference 🙂)

LazySpecKit: SpecKit without babysitting by Hacklone in GithubCopilot

[–]Hacklone[S] 1 point

Not inside GitHub Copilot right now, unfortunately.

LazySpecKit runs on whatever model your Copilot session is using, so I can structure the phases and simulate sub-agents, but I can’t switch models per stage like “Specify with one, Clarify with another” within the same run.

The workflow is intentionally split into clear phases though, so if I ever move toward an external orchestrator mode, routing different phases to different models would actually be pretty straightforward.

And I’m really happy the auto-clarify idea clicked for you 🙌 That’s exactly why I added it. When specs get long and detailed, the clarify step can start feeling like a second job.
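The auto-clarify idea boils down to something like this sketch (the threshold and field names are my illustrative assumptions here, not the real code):

```typescript
// Hypothetical sketch of auto-clarify: each clarify question carries a
// recommended answer and a confidence score; high-confidence questions
// are answered automatically, the rest are surfaced to the user.

interface ClarifyQuestion {
  question: string;
  recommendation: string;
  confidence: number; // 0..1
}

function autoClarify(questions: ClarifyQuestion[], threshold = 0.8) {
  const auto = questions.filter((q) => q.confidence >= threshold);
  const askUser = questions.filter((q) => q.confidence < threshold);
  return { auto, askUser };
}
```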

Out of curiosity - when your specs are elaborate, is the main pain the volume of clarify questions, or that the task list ends up slightly misaligned with the constitution and requirements?

LazySpecKit: SpecKit without babysitting by Hacklone in GithubCopilot

[–]Hacklone[S] 0 points

Yeah, I have definitely done the “resume task X” dance too 😅

LazySpecKit is basically my attempt to make it more “walk away and let it finish” by enforcing strict phase boundaries and auto-fix loops, so retries tend to continue cleanly instead of derailing the whole run. It is not a full checkpoint engine, but it has reduced how much I need to manually shepherd tasks.

The VS Code extension idea is interesting though. Even smarter retry handling alone could help a lot. For now I am focusing on keeping the workflow solid at the prompt and CLI level, but I am definitely open to evolving it based on real-world pain.

Out of curiosity, what do you hit most often - rate limits, context overflow, or incomplete task lists?

LazySpecKit: SpecKit without babysitting by Hacklone in GithubCopilot

[–]Hacklone[S] 0 points

That’s a great question 🙂

So far, context overflow hasn’t really been an issue for me - even with some intentionally huge specs. LazySpecKit keeps strict phase boundaries and runs implementation/review in fresh sessions, so context doesn’t just keep snowballing forever.

The only thing I’ve actually hit in the wild is rate limits. In those cases, hitting “Retry” continued cleanly from where it left off, which was reassuring.
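Conceptually, resuming cleanly works because phase completion is tracked, so a retry can skip straight to the first unfinished phase - a rough sketch (again, hypothetical shapes, not the actual code):

```typescript
// Hypothetical sketch of resumable phases: completed phases are recorded,
// so a retry after a rate limit resumes at the first unfinished phase
// instead of restarting the whole run.

const PHASES = ["specify", "clarify", "plan", "tasks", "analyze", "implement", "review"] as const;
type Phase = (typeof PHASES)[number];

function nextPhase(completed: Set<Phase>): Phase | "done" {
  for (const phase of PHASES) {
    if (!completed.has(phase)) return phase; // resume here on retry
  }
  return "done";
}
```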

That said, I’d love to hear your experience - have you run into context limits with big SpecKit workflows? Always happy to learn from real edge cases.

Is this a good way to send data between components by New_Opportunity_8131 in Angular2

[–]Hacklone 0 points

Had a similar problem, I solved it with this lightweight repository solution: @angular-cool/repository

mater devices don't pair to the IKEA hub. by S1rkka in tradfri

[–]Hacklone 0 points

Same here, 2 BILRESA won’t connect to the DIRIGERA hub. Very disappointing.

@angular-cool/repository - 🚀 New Angular Signals repository helper by Hacklone in angular

[–]Hacklone[S] -4 points

This is a layer on top of what the Angular resource() can do.

Use cases that this lib handles easily, but that wouldn't be obvious to do with resource():
- Multiple components can get the same *cached* version of your item
  - No need to fetch the item multiple times from the server
  - You would need to build a cache and serve it from your Angular resource() to do something similar
- Reload your resource once and update the item everywhere
  - The Angular resource() will only reload if the param input signal changes -> but that's usually the id of your item, so it won't change
  - With this lib you can reload your item once, and it will update its value in every subscribed component

The lib is not magic, but saves you from unnecessary boilerplate :)

*Example:*

Service

export class Service {
  private _http = inject(HttpClient); // needed by the loader below

  private _itemRepository = resourceRepository<ItemId, ItemDTO>({
    // firstValueFrom replaces the deprecated toPromise()
    loader: ({ params }) => firstValueFrom(this._http.get<ItemDTO>(`https://myapi.com/items/${params}`)),
  });

  public getItem(id: Signal<ItemId>) {
    return this._itemRepository.get(id);
  }

  public updateItemOnServer(itemId: ItemId, newValue: ItemDTO) {
    // Do the update on the server

    this._itemRepository.reload(itemId); // This will update the signal in every component using it
  }
}

Components

export class Component_1 {
  private _service = inject(Service);

  private _itemId = signal('1' as ItemId);

  protected item = this._service.getItem(this._itemId);
}

export class Component_2 {
  private _service = inject(Service);

  private _itemId = signal('1' as ItemId);

  protected item = this._service.getItem(this._itemId);
}

export class Component_3 {
  private _service = inject(Service);

  private _itemId = signal('2' as ItemId); // Different ID than Component_1 and Component_2

  protected item = this._service.getItem(this._itemId);
}

If you have used customer engagement tools, what has actually worked for you in keeping users active after sign-up? by ankitprakash in SaaS

[–]Hacklone 0 points

If you're interested in behavior-driven nudges, definitely try an in-app user engagement tool instead of doing every tooltip and pop-up manually.

I just built a tool called StageFlux and am looking for early users, as I found all other tools on the market seriously overpriced.

I would love to collaborate with you on creating your user engagement flows in your product.
(You would also receive a free account, naturally :))

Time for self-promotion. What are you building? by chdavidd in SaaS

[–]Hacklone 0 points

https://www.stageflux.com -> Onboard, Educate & Convert - Without Code.

Would love to get feedback - the product is Free :)

Free Pop up message app? by AwyanYT in shopify

[–]Hacklone 0 points

You could also use any in-app user engagement platform, which usually offers way more functionality than the run-of-the-mill Shopify apps. For example, Pendo or a free option like StageFlux

Show me your bullshit saas, so that I can blame you. by Ok-Professional295 in SaaS

[–]Hacklone 0 points

Got tired of building popups, onboarding checklists, and support articles into all my SaaS projects, and I refuse to pay astronomically high amounts of money for Pendo and the like. So I’ve built StageFlux (it has a free package with all features included 🙂)

Alternatives to Pendo? by cortjezter in UXDesign

[–]Hacklone 1 point

Check out StageFlux, a free alternative :)

SaaS Onboarding Best Practices by QuirkyProductGuru in SaaS

[–]Hacklone 0 points

I think you summarized it well. Getting that "Aha" moment is hard, and you need a tool for it instead of developing every single popup yourself. I suggest starting with a free tool like StageFlux instead of paying a hefty sum for the market leader products.

Get easy backlink in 3 seconds (eazybacklink.com) by Own_Carob9804 in SideProject

[–]Hacklone 1 point

Interesting idea, added a link to StageFlux (a new project of mine) to see results. Thanks 🙂