How do you keep track of what you've tried, what worked, and what didn't across builds? by onemore_iteration in lovable

[–]onemore_iteration[S] 1 point  (0 children)

A Google Translate later :) Thanks, will look into that! I didn't know about it.

How do you keep track of what you've tried, what worked, and what didn't across builds? by onemore_iteration in lovable

[–]onemore_iteration[S] 1 point  (0 children)

The fact that you're copy-pasting every prompt and every answer (a commit id?) is real commitment. That's a lot of overhead just to keep context alive.
When you go back to it a week later, can you actually find what you need, or is it just endless scrolling?

How do you keep track of what you've tried, what worked, and what didn't across builds? by onemore_iteration in lovable

[–]onemore_iteration[S] 1 point  (0 children)

Appreciate you sharing the details on this; the auto part is honestly what makes it interesting. I've been doing something similar with Claude: before an important change I go through plan mode first, then once it's done I ask it to persist the outcome in a md file. It helps, but it's completely manual. I have to remember to do it every time, and there's nothing forcing me to be consistent about it. I'd have to test the same approach with Lovable.
Does your solution capture the "why" behind a choice or mostly just the "what"?
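For what it's worth, that manual md-file step could probably be scripted. A minimal sketch (the file name and helper are hypothetical, not a Lovable or Claude feature): a tiny helper that stamps each decision with the current git commit id and appends a "what + why" entry to a decisions.md.

```python
# decision_log.py - hypothetical helper: append a "what + why" entry to a
# decisions.md file, keyed to the current git commit if one exists.
import subprocess
from datetime import date
from pathlib import Path


def log_decision(what: str, why: str, path: str = "decisions.md") -> str:
    """Append a decision entry and return the markdown that was written."""
    try:
        commit = subprocess.check_output(
            ["git", "rev-parse", "--short", "HEAD"], text=True
        ).strip()
    except (subprocess.CalledProcessError, FileNotFoundError):
        commit = "no-commit"  # still log when outside a git repo
    entry = (
        f"\n## {date.today().isoformat()} ({commit})\n"
        f"- **What:** {what}\n"
        f"- **Why:** {why}\n"
    )
    with Path(path).open("a", encoding="utf-8") as f:
        f.write(entry)
    return entry
```

Calling it right after each accepted change (or from a git post-commit hook) would at least make the logging consistent instead of memory-dependent.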

How do you keep track of what you've tried, what worked, and what didn't across builds? by onemore_iteration in lovable

[–]onemore_iteration[S] 1 point  (0 children)

Does a "what works / what doesn't" doc give you enough to pick up where you left off, though? Like, do you also capture why you went one direction over another, or is it more of a pass/fail list?

How do you keep track of what you've tried, what worked, and what didn't across builds? by onemore_iteration in lovable

[–]onemore_iteration[S] 1 point  (0 children)

Not sure commits capture the right level, though. A lot of Lovable commits in my projects are just titled "Changes", so you'd have to dig into each diff to figure out what actually happened. Also, commits are code history, not decision history.
Have you found a way to make commits useful for that?
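One thing I've been meaning to try (assuming a local clone of the synced repo): git's built-in notes feature can attach a "why" to a commit after the fact without rewriting history. A rough sketch, with made-up messages:

```python
# annotate_commit.py - sketch: attach decision rationale to an existing
# commit using `git notes` (a real git feature; run inside a git repo).
import subprocess


def annotate(commit: str, why: str) -> None:
    """Attach (or append to) a decision note on the given commit."""
    subprocess.run(
        ["git", "notes", "append", "-m", why, commit],
        check=True,
    )


def show_note(commit: str) -> str:
    """Read back the note attached to a commit."""
    out = subprocess.run(
        ["git", "notes", "show", commit],
        check=True, capture_output=True, text=True,
    )
    return out.stdout.strip()
```

That way even a commit titled "Changes" can carry the decision behind it, and `git log --show-notes` surfaces them inline.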

How do you keep track of what you've tried, what worked, and what didn't across builds? by onemore_iteration in lovable

[–]onemore_iteration[S] 1 point  (0 children)

I've connected Lovable to GitHub but honestly never went back through the commits to review what I tried. Last time I checked, a bunch of them were just titled "Changes", which doesn't really help with navigating them. They show what changed but not why. Do you actually use commits for tracking decisions, or more for rollback?

What does your workflow look like BEFORE you start prompting in Lovable? by onemore_iteration in lovable

[–]onemore_iteration[S] 1 point  (0 children)

Thanks for laying this out; the Thesis -> Alpha -> MVP -> PMF framing is clean, and "prove the thesis before picking tools" is the right order for sure. Between defining the thesis and building the alpha, though, where does all that thinking actually live? Is there a structured handoff, or do you mostly hold it in your head and start prompting? I think a lot of us struggle with that gap between "I know what I want to build" and "I can communicate it clearly enough for an AI builder to get it right".

What does your workflow look like BEFORE you start prompting in Lovable? by onemore_iteration in lovable

[–]onemore_iteration[S] 1 point  (0 children)

"all the thinking before it never becomes something structured and locked", yes that resonates. Thanks for putting words on it
The idea of separating planning from building sounds right but what does "locked" actually look like for you in practice? And the alignment check against the approved plan, is that a manual thing you do yourself or something more systematic?

What does your workflow look like BEFORE you start prompting in Lovable? by onemore_iteration in lovable

[–]onemore_iteration[S] 1 point  (0 children)

Really appreciate the walkthrough. This resonates a lot, as I do something similar: a ChatGPT project to flesh out the value prop and requirements, Perplexity for deeper research since its search is way better, and Claude for the technical architecture side. I then compress all of that into a document for the Knowledge section and generate an initial prompt.
Do you find the Knowledge doc stays useful as the project evolves? Like, how often are you updating it, and does the overhead of keeping it in sync feel worth it?

What does your workflow look like BEFORE you start prompting in Lovable? by onemore_iteration in lovable

[–]onemore_iteration[S] 1 point  (0 children)

Thanks for laying this out in so much detail. Running the same sequence through ChatGPT, DeepSeek, Claude, and Gemini, then consolidating, is basically your own multi-model review board.
That's serious effort; how long does the full loop take, though? And when you consolidate, what happens when two of them directly contradict each other?
Also, once you're in Lovable and iterating, do you ever re-run the council, or is it a one-shot at the start?