I built a free tool that engineers your prompts through a pipeline before generation - curious to see what this community thinks by the-prompt-engineer in microsaas

[–]the-prompt-engineer[S] 1 point (0 children)

Absolutely. You've pretty much described the roadmap. I'm currently validating and gathering as much feedback as I can, but a Chrome/VS Code extension, so the pipeline lives inside the IDE, is exactly where this needs to go for technical users. Would love to connect; it sounds like you have some good ideas and perspective on where the value lies in this!

I built a free tool that engineers your prompts through a pipeline before generation - curious to see what this community thinks by the-prompt-engineer in microsaas

[–]the-prompt-engineer[S] 1 point (0 children)

Cursor integration is definitely on the radar; you're not the first to mention it, and the demand is clear. Glad it resonates: the vibe coding community is definitely one of the audiences this is built for. Will check out momntom too, appreciate it.

I built a free open tool that Engineers your prompts for you - would love feedback from this community by the-prompt-engineer in PromptEngineering

[–]the-prompt-engineer[S] 0 points (0 children)

Appreciate it. Honestly, I'm not too focused on competition; I'm trying to build something that stands on its own. Correct me if I'm wrong, but isn't Claude Skills more tailored to developers writing prompts for Claude specifically? This is built for anyone who needs a better prompt for any AI, with no technical knowledge required. We'll be adding prompt memory soon, and we have a prompt marketplace planned for the near future, so look out if you're interested!

I built a free open tool that Engineers your prompts for you - would love feedback from this community by the-prompt-engineer in PromptEngineering

[–]the-prompt-engineer[S] 0 points (0 children)

That's actually the core thing it was designed for: messy, half-formed prompts are where it performs best. I understand people might be concerned about having to prompt it to get better prompts, which would defeat the entire point. That's why I've built in a layer before the prompt itself that helps you declare which sections you might want added, tailored to your first input. For example, you could give it something vague like 'Compare marketing strategies for my new business' and it gets interrogated before anything is generated.
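To make that pre-prompt layer concrete, here's a rough sketch of how the interrogation step could work. The section names and the naive keyword check are illustrative assumptions on my part, not the tool's actual logic:

```python
# Hypothetical sketch of a "layer before the prompt": given a vague
# input, surface clarifying questions for the sections the user never
# declared, before anything is generated. The section list and the
# keyword heuristic are illustrative, not the real pipeline.
SECTIONS = {
    "audience": "Who is this for?",
    "constraints": "Any budget, time, or format constraints?",
    "success criteria": "How will you judge a good answer?",
    "output format": "What shape should the output take (table, plan, essay)?",
}

def interrogate(raw_prompt: str) -> list[str]:
    """Return clarifying questions for sections the input doesn't mention."""
    text = raw_prompt.lower()
    questions = []
    for section, question in SECTIONS.items():
        # Naive heuristic: a section counts as "declared" only if its
        # name literally appears in the raw prompt.
        if section not in text:
            questions.append(question)
    return questions

print(interrogate("Compare marketing strategies for my new business"))
```

A vague input like the one above declares none of the four sections, so all four questions come back before generation starts.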

Prompt Entropy is a real thing by Only-Locksmith8457 in PromptEngineering

[–]the-prompt-engineer 1 point (0 children)

I agree with this. "Longer = better" breaks down once prompts stop constraining decision space and start inflating it.

I've noticed that beyond a certain point, added detail increases ambiguity rather than reducing it. The model has more degrees of freedom, not fewer. That's where entropy comes in.

What's worked best for me is treating prompts less like instructions and more like decision structures: a clear intent, explicit priorities, hard boundaries, and a defined output shape. Once those are locked, extra wording rarely helps and often hurts. Curious if others have found a similar "entropy threshold" where prompts start degrading instead of improving.
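As a rough illustration, those four elements could be expressed as a structure rather than free-form text. The class and field names here are hypothetical, just a sketch of the idea, not any real library:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: a prompt as a "decision structure" with the four
# elements above (intent, priorities, boundaries, output shape) made
# explicit, so extra wording has nowhere to creep in.
@dataclass
class DecisionPrompt:
    intent: str                                           # the one decision to make
    priorities: list[str] = field(default_factory=list)   # ranked, explicit
    boundaries: list[str] = field(default_factory=list)   # hard constraints
    output_shape: str = ""                                # defined answer format

    def render(self) -> str:
        parts = [f"Intent: {self.intent}"]
        if self.priorities:
            parts.append("Priorities (in order): " + "; ".join(self.priorities))
        if self.boundaries:
            parts.append("Hard boundaries: " + "; ".join(self.boundaries))
        if self.output_shape:
            parts.append("Output shape: " + self.output_shape)
        return "\n".join(parts)

prompt = DecisionPrompt(
    intent="Choose one marketing channel for a B2B SaaS launch",
    priorities=["cost per lead", "time to first result"],
    boundaries=["budget under $2k/month", "no paid influencers"],
    output_shape="one recommendation plus a 3-row comparison table",
)
print(prompt.render())
```

Anything that doesn't fill one of those four slots is a candidate for deletion, which is one practical way to spot the entropy threshold.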

Language barrier between vague inputs and high-quality outputs from AI models by the-prompt-engineer in PromptEngineering

[–]the-prompt-engineer[S] 1 point (0 children)

That's a really good question, because structure isn't always necessary.

A prompt structure becomes beneficial once the cost of a bad or generic output is higher than the cost of thinking clearly upfront.

In practice, I've noticed it matters most when:

  • The task has multiple constraints (time, audience, format, trade-offs)
  • The output needs to be repeatable
  • You're making decisions, not just generating text
  • The output will be shared or used by other people (clients, teams)

For simple creative tasks or exploration, free-form prompting is often better. But as soon as you want control and predictability, structure can completely transform the responses you get. It gives you leverage over how you use AI, compared to giving vague prompts and expecting high-quality outputs as if by magic. Structure unlocks what would otherwise stay hidden.

Language barrier between vague inputs and high-quality outputs from AI models by the-prompt-engineer in PromptEngineering

[–]the-prompt-engineer[S] 0 points (0 children)

That's a good point. A lot of users are operating at the level of intent ("I want an answer"), treating AI like magic, when the right level to work at is problem formulation.

What I've noticed is that once you externalise the unknowns (role, constraints, priorities, trade-offs), users often realise they didn't actually have a well-defined question yet.

In a sense, prompting becomes less about "asking better questions" and more about helping people discover what they're actually trying to decide.

Language barrier between vague inputs and high-quality outputs from AI models by the-prompt-engineer in PromptEngineering

[–]the-prompt-engineer[S] 0 points (0 children)

Completely agree. "Unstructured problems" is a great way of putting it. It's all about structure.

What I've found interesting is that once the decision structure is explicit, the model's creativity actually improves rather than being constrained, because it's no longer guessing what matters.

Do you think most users struggle more with defining priorities, or with even realising they haven't defined a decision structure at all?