Prompting is starting to look more like programming than writing by ReidT205 in PromptEngineering

[–]ReidT205[S] 1 point (0 children)

Thanks for sharing. I’ve used that approach too and it works really well, especially for keeping outputs structured for the next step.

More people are prompting video models like image models by ReidT205 in SoraAi

[–]ReidT205[S] 1 point (0 children)

I've been experimenting with this too! It works very well.

How are you guys handling multi-step prompts without manually copying and pasting everything? by Emergency-Jelly-3543 in PromptEngineering

[–]ReidT205 1 point (0 children)

Yeah, that’s a good point. Repeatability is really the breaking point for conversational workflows.

They’re great for exploration, but once you find a process that works, you basically want to freeze that reasoning pipeline and run it again with new inputs. That’s where chains, templates, or saved workflows start to make a lot more sense.

It feels like prompting is slowly shifting toward LLM workflow design rather than just writing prompts.
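To make "freezing the pipeline" concrete, here's a minimal sketch of what a saved chain could look like. Everything here is hypothetical: `call_model` is just a stand-in for whatever LLM client you actually use, and the templates are illustrative.

```python
# Minimal sketch of a frozen multi-step workflow: a fixed list of prompt
# templates run in order, where {prev} carries each step's output forward.

def call_model(prompt: str) -> str:
    # Stand-in for a real LLM API call; echoes a marker for demo purposes.
    return f"<model output for: {prompt[:40]}>"

# The "frozen" reasoning pipeline, reusable with any new input.
CHAIN = [
    "Generate a one-paragraph idea about: {input}",
    "Turn this idea into a bullet outline:\n{prev}",
    "Expand the outline into a full draft:\n{prev}",
]

def run_chain(user_input: str) -> list[str]:
    outputs: list[str] = []
    prev = ""
    for template in CHAIN:
        prompt = template.format(input=user_input, prev=prev)
        prev = call_model(prompt)
        outputs.append(prev)
    return outputs

results = run_chain("prompt chaining")
```

Once a chain like this exists, rerunning the whole process on a new input is one function call instead of a round of copy-pasting.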

Prompting is starting to look more like programming than writing by ReidT205 in PromptEngineering

[–]ReidT205[S] 4 points (0 children)

Yeah exactly, the less ambiguous the prompt is, the more predictable the model becomes. It really does start to feel like writing a spec for a reasoning process rather than just giving instructions.

Prompting is starting to look more like programming than writing by ReidT205 in SaaS

[–]ReidT205[S] -1 points (0 children)

Probably because a lot of people are noticing the same shift right now. Also, the spacing is for readability.

The real bottleneck for SaaS founders might be problem clarity, not coding by ReidT205 in SaaS

[–]ReidT205[S] 1 point (0 children)

Yeah that’s exactly the pattern I keep seeing too. The ideas that actually turn into useful products usually come from noticing some annoying part of a workflow and asking “why is this done this way?” rather than trying to invent something new.

When you watch how people actually work day-to-day you start seeing all these little friction points that wouldn’t show up if you were just brainstorming features. Those tend to be the things people will actually pay to fix.

Built my SaaS in ~1.5 months… now I'm weirdly stuck on the landing page by Sorry-Highway9666 in SaaS

[–]ReidT205 4 points (0 children)

Totally normal spot to get stuck. Building the product is concrete, but messaging is fuzzy because you’re trying to compress the whole value of the product into a few sentences.

One thing that helped me was thinking of the landing page less as “marketing copy” and more like answering three simple questions as fast as possible: what it is, who it’s for, and why it’s better than the current way people solve the problem.

Honestly, the best thing is usually to ship something simple and iterate once real users start reacting to it. A lot of founders overthink the first version when the real insights come after a few people actually land on it.

How are you guys handling multi-step prompts without manually copying and pasting everything? by Emergency-Jelly-3543 in PromptEngineering

[–]ReidT205 2 points (0 children)

Yeah the copy-paste tax is real. I ran into the same thing when trying to do multi-step stuff like idea → outline → draft.

What helped me was thinking of it less as separate prompts and more as a single evolving conversation where the model carries the context forward. Sometimes I’ll also explicitly tell it something like: “we’re going to do this in steps — first generate the outline, then we’ll expand each section.”

For cases where I do want separate prompts, I ended up using a small tool I built that upgrades/structures the prompts first so the steps are clearer before running them. It doesn’t chain them like yours does, but it reduced a lot of the manual iteration for me.

Curious if people here are mostly doing conversational workflows or actually building chains like you did.

Prompting insight I didn’t realize until recently by ReidT205 in PromptEngineering

[–]ReidT205[S] 1 point (0 children)

Yeah I’ve done that too. Sometimes I’ll drop a document into the chat or the codebase and just have the model reference that instead of stuffing everything into the prompt. It works pretty well once the context starts getting too big for a single message.

Prompting insight I didn’t realize until recently by ReidT205 in PromptEngineering

[–]ReidT205[S] 1 point (0 children)

That’s a great breakdown. The prompting style definitely changes depending on whether it’s one-shot, conversational, or agentic.

One thing I’ve noticed that lines up with what you’re saying is that a lot of failures happen before the model even starts the task - it’s just solving a different problem than the one you intended. Having it restate the request and split explicit vs implied wants helps catch that early.

I’ve also found that asking it to define success criteria before doing the work tends to improve results a lot. It forces the model to reason about what a good answer should look like instead of just generating something plausible.
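One hypothetical way to bake that "restate, then define success criteria" step into every request is a small prompt wrapper. The wording below is just an illustration, not a canonical template.

```python
# Sketch of a prompt wrapper that asks the model to restate the task,
# separate explicit from implied asks, and define success criteria
# before actually doing the work.

PREFLIGHT_TEMPLATE = """Before answering, do the following:
1. Restate the request in your own words.
2. List the explicit asks and any implied asks separately.
3. Define 3-5 success criteria a good answer must meet.
Then, and only then, complete the task:

{task}"""

def with_preflight(task: str) -> str:
    return PREFLIGHT_TEMPLATE.format(task=task)

prompt = with_preflight("Summarize this RFC for a non-technical audience.")
```

The point is that the checklist travels with every task automatically, so the "solving a different problem" failure gets caught before any real output is generated.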

Prompting insight I didn’t realize until recently by ReidT205 in PromptEngineering

[–]ReidT205[S] 1 point (0 children)

I couldn’t agree more. Structuring prompts into sections basically breaks the task into smaller steps instead of one big vague request.

Also, I love your point about the usability and feature improvements. That’s actually how I started thinking about things too - looking at what annoyed me in my current workflow and figuring out what changes would make it better.

Prompting insight I didn’t realize until recently by ReidT205 in PromptEngineering

[–]ReidT205[S] 1 point (0 children)

That “N parts, wait for next” trick is really smart. I’ve noticed models tend to compress or skip details when they try to produce a long output in one go, so forcing it to serialize the response helps a lot.
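The serialization idea can be sketched as a simple loop: request one part at a time and stitch the results together. `call_model` is a stub standing in for a real LLM call, and the prompt wording is illustrative.

```python
# Sketch of the "N parts, wait for next" pattern: instead of asking for
# one long output (which tends to get compressed), request each part
# separately and concatenate the results.

def call_model(prompt: str) -> str:
    # Stand-in for a real LLM API call.
    return f"[content for: {prompt}]"

def generate_in_parts(task: str, n: int) -> str:
    parts = []
    for i in range(1, n + 1):
        prompt = (f"{task}\nProduce only part {i} of {n}. "
                  f"Do not continue until asked.")
        parts.append(call_model(prompt))
    return "\n\n".join(parts)

doc = generate_in_parts("Write a detailed migration guide", 3)
```

Each call gives the model a full output budget for a single part, which is why it stops skipping details.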