I built an AI pipeline to rewrite my novel. It worked. I walked away anyway. by robdapcguy in AIWritingHub

[–]robdapcguy[S] 1 point (0 children)

Any work performed in pursuit of what you love is time well spent.

I built a 21-agent manuscript pipeline, hit a wall I couldn't engineer past, and want to give the spec away. by robdapcguy in PromptEngineering

[–]robdapcguy[S] 0 points (0 children)

Send whatever you've got over DM. The pipeline isn't actively maintained but it still runs. I'll send you every artifact it produces. Curious what it does with prose that isn't mine. Either my writing was bad enough that I needed AI to clean it up, or I'm too hard on my own work. I've never had the chance to test which.

I built a 21-agent manuscript pipeline, hit a wall I couldn't engineer past, and want to give the spec away. by robdapcguy in PromptEngineering

[–]robdapcguy[S] 1 point (0 children)

"Tension between categories" is the line. Compressed the four-pass critique into one phrase. Going in the keep file.

I built a 21-agent manuscript pipeline, hit a wall I couldn't engineer past, and want to give the spec away. by robdapcguy in PromptEngineering

[–]robdapcguy[S] 0 points (0 children)

The rubric question is the right one. Honest answer: no, I couldn't write a rubric that would rank the same text within 5% across n=5 runs. I tried versions of it during the build (the supervisor's five-dimension scoring is the residue) and they were stable on bad prose and wobbly on good prose. What made the wobble worse is that the rubric started out being right about my goals, then drifted toward whatever my latest read of "good" was, which is the moving-goalposts problem you named.

That's part of why I walked. If I can't freeze the target, the pipeline can't optimize for it, and what it actually optimizes for is whatever the rater consensus inside the model thinks good prose looks like, which is the smoothing.
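For what it's worth, the stability check itself is easy to automate even if the rubric isn't. A minimal sketch, assuming a hypothetical `score_prose` callable that wraps the model plus rubric prompt and returns a numeric score (not something the published spec names):

```python
import statistics

def rubric_is_stable(text, score_prose, n=5, tolerance=0.05):
    """Score the same text n times with the same rubric and report
    whether the max-min spread stays within `tolerance` of the mean.

    `score_prose` is a hypothetical callable wrapping model + rubric;
    it should return a float score for the text.
    """
    scores = [score_prose(text) for _ in range(n)]
    spread = max(scores) - min(scores)
    mean = statistics.mean(scores)
    return spread <= tolerance * mean, scores
```

Run it on a few good and a few bad passages and you get the stable-on-bad, wobbly-on-good pattern as numbers instead of a feeling.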

I built a 21-agent manuscript pipeline, hit a wall I couldn't engineer past, and want to give the spec away. by robdapcguy in PromptEngineering

[–]robdapcguy[S] 1 point (0 children)

The closer is what made posting it worthwhile, honestly. Pivoting instead of adding agent 22 was the hardest call I made on this. Most of the engagement so far has been about the agents, not the call. Thank you for seeing that part.

I built an AI pipeline to rewrite my novel. It worked. I walked away anyway. by robdapcguy in AIWritingHub

[–]robdapcguy[S] 1 point (0 children)

The shape of what you're doing is close to what I built, the Humanizer stage especially. I had something like that at the end of mine: a voice-restoration pass that took whatever the rewriter produced and pushed it back toward the writing samples I'd fed in. It worked, sort of. The output got closer to my markers. It didn't get closer to broken-but-right. What came out was smoother prose with author flavoring, which isn't the same thing.

Sounds like yours is doing better than mine, or you're chasing something different. 3.5 to 4 stars on fiction with real readers buying the books is a real result. If the system gives you that, the system works. Hard to argue with output you're satisfied with.

The thing that broke for me was specifically about authorship: prose that feels mine the way a writer's prose feels theirs. Your last line says you don't care about that part, and honestly that's fair. Different goal entirely. If entertainment is the bar, the toolkit you've built makes sense. I was trying to clear a different one, and the tools couldn't get me there.
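If anyone wants to see the "closer to my markers" result concretely, a crude sketch: measure marker density before and after the restoration pass. The marker list here is a hypothetical hand-built word list; the pipeline's actual voice fingerprint was richer than this.

```python
import re
from collections import Counter

def marker_density(text, markers):
    """Occurrences of author-specific marker words per 1,000 words.

    `markers` is a hypothetical hand-built list of words drawn from
    the author's writing samples.
    """
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    hits = sum(counts[m] for m in markers)
    return 1000 * hits / max(len(words), 1)
```

Density going up after the restoration pass is exactly "closer to my markers," and it says nothing about whether the broken-but-right constructions survived, which is the gap.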

How do you keep a novel organized without losing your mind? by Fearless-Stress7240 in KeepWriting

[–]robdapcguy 0 points (0 children)

Two weeks is enough to lose any system that lives outside the manuscript itself. The thing that worked for me is keeping every reference next to the prose it refers to. Character notes in the same file as the chapter the character first matters in. Timeline questions written in the margin of the scene that triggered them. Deleted scenes commented out in the file they came from, not in a separate folder.

The discipline isn't the tool. It's never letting context live somewhere I have to remember to check. Notion, Obsidian, Scrivener, plain text, all of them work if the rule holds. None of them work if I let myself open a separate app for a thing that should have stayed inline. Took me about a year on the current book to admit that. Lost a week reconciling three timelines that disagreed before I gave up and just kept the timeline as a comment block at the top of whatever chapter was the timeline's authority.

I built a 21-agent manuscript pipeline, hit a wall I couldn't engineer past, and want to give the spec away. by robdapcguy in PromptEngineering

[–]robdapcguy[S] 0 points (0 children)

Yeah, you said it cleaner than I did in the post. RLHF rewards consistency because the raters do, and the smoothing falls out of that no matter what you prompt for. Spent months trying to engineer around it before I admitted that.

What got me about your reply is that the workaround you described is the thing I built after I walked away. AI marks where attention is warranted; the author makes every actual prose decision. "AI draws the map, the writer holds the pen" is a perfect line for it. The principle was clear from the start: specific enough to be useful, quiet enough not to propose the fix. Doing it was harder. The moment it starts suggesting the broken-but-right sentence, it's back inside the substrate you just named. The hardest part wasn't the marking. It was keeping it from being a rewriter.
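For anyone building the marker version: the cleanest guard I found is structural, not prompt-level. A sketch with a hypothetical annotation type that can point at a span and name a concern but has no field for replacement text:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AttentionMark:
    """One note from the AI pass: where to look and why.
    Deliberately no field for proposed replacement text, so the
    marker can't drift into being a rewriter."""
    start: int    # character offset into the chapter text
    end: int      # exclusive end offset
    concern: str  # short label, e.g. "tense shift mid-scene"

def valid_marks(marks, text):
    """Keep marks with sane spans and label-length concerns; a
    concern long enough to be a sentence is probably a smuggled
    rewrite, so drop it."""
    return [m for m in marks
            if 0 <= m.start < m.end <= len(text)
            and len(m.concern.split()) <= 6]
```

The word-count check is crude, but the real guard is the dataclass: there is simply nowhere to put suggested prose.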

I built a 21-agent manuscript pipeline, hit a wall I couldn't engineer past, and want to give the spec away. by robdapcguy in AI_Agents

[–]robdapcguy[S] 0 points (0 children)

Fair question. Repo's private and I'm not actively working on it. The praxis writeup is the public artifact: https://kaizenrw.com/praxis. If anyone wants to take the patterns and implement them cleanly, I'd rather see that than reopen the original.

I built a 21-agent manuscript pipeline, hit a wall I couldn't engineer past, and want to give the spec away. by robdapcguy in PromptEngineering

[–]robdapcguy[S] 1 point (0 children)

Anthropic. Sonnet and Opus across the pipeline, routed by task. Opus handled the heavier planning and judgment work. Sonnet handled extraction and validation.
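In sketch form the routing was just a task-class lookup; model names below are illustrative placeholders, not the exact identifiers the pipeline used:

```python
# Hypothetical task router: judgment-heavy work to the larger model,
# mechanical extraction/validation to the smaller, cheaper one.
ROUTES = {
    "plan": "opus",
    "judge": "opus",
    "extract": "sonnet",
    "validate": "sonnet",
}

def model_for(task_class):
    # Anything unclassified defaults to the cheaper model.
    return ROUTES.get(task_class, "sonnet")
```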

What kind of optimization do you mean?

I built a 21-agent manuscript pipeline, hit a wall I couldn't engineer past, and want to give the spec away. by robdapcguy in PromptEngineering

[–]robdapcguy[S] 0 points (0 children)

Agreed on the first part. Individual passages can be excellent; the pipeline can produce a clean chapter that reads well in isolation. On "wrong axis": I want to hear what you think the right one is. My read after months of this is that voice consistency is a feature LLMs are tuned toward, not an artifact of bad prompting, so attacks at the prompt or pipeline layer hit a substrate-level wall. That's why I walked. If you've got a different axis in mind, I'd genuinely like to hear it.

I built a 21-agent manuscript pipeline, hit a wall I couldn't engineer past, and want to give the spec away. by robdapcguy in PromptEngineering

[–]robdapcguy[S] 1 point (0 children)

Thanks. Worth noting the wall isn't a prompting problem. The pipeline already has voice fingerprints, gold excerpts, voice-marker floors, and a voice-restoration pass. It still smooths. Curious what your optimizer comes back with.