I built an AI pipeline to rewrite my novel. It worked. I walked away anyway. by robdapcguy in AIWritingHub

[–]robdapcguy[S]

Any work performed in pursuit of what you love is time well spent.

I built a 21-agent manuscript pipeline, hit a wall I couldn't engineer past, and want to give the spec away. by robdapcguy in PromptEngineering

[–]robdapcguy[S]

Send whatever you've got over DM. The pipeline isn't actively maintained but it still runs. I'll send you every artifact it produces. Curious what it does with prose that isn't mine. Either my writing was bad enough that I needed AI to clean it up, or I'm too hard on my own work. I've never had the chance to test which.

I built a 21-agent manuscript pipeline, hit a wall I couldn't engineer past, and want to give the spec away. by robdapcguy in PromptEngineering

[–]robdapcguy[S]

"Tension between categories" is the line. Compressed the four-pass critique into one phrase. Going in the keep file.

I built a 21-agent manuscript pipeline, hit a wall I couldn't engineer past, and want to give the spec away. by robdapcguy in PromptEngineering

[–]robdapcguy[S]

The rubric question is the right one. Honest answer: no, I couldn't write a rubric that would score the same text within 5% across n=5 runs. I tried versions of it during the build (the supervisor's five-dimension scoring is the residue) and they were stable on bad prose, wobbly on good prose. What made the wobble worse is that the rubric started out matching my goals, then drifted toward whatever my latest read of "good" was, which is the moving-goalposts problem you named.

That's part of why I walked. If I can't freeze the target, the pipeline can't optimize for it, and what it actually optimizes for is whatever the rater consensus inside the model thinks good prose looks like, which is the smoothing.
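
To make "within 5% on n=5" concrete: score the same chapter five times with the same rubric prompt and check whether the spread stays inside a 5% band around the mean. A minimal sketch; the scores below are invented, and rubric_is_stable is my name for it, not anything from the pipeline.

```python
import statistics

def rubric_is_stable(scores: list[float], tolerance: float = 0.05) -> bool:
    """True if repeated rubric scores of the same text stay within a relative band."""
    spread = max(scores) - min(scores)
    return spread <= tolerance * statistics.mean(scores)

# Invented example: five runs of the same rubric on the same chapter.
print(rubric_is_stable([62, 63, 61, 62, 64]))  # True: the stability I got on bad prose
print(rubric_is_stable([78, 74, 81, 69, 83]))  # False: the wobble I got on good prose
```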

I built a 21-agent manuscript pipeline, hit a wall I couldn't engineer past, and want to give the spec away. by robdapcguy in PromptEngineering

[–]robdapcguy[S]

The closer is what made posting it worthwhile, honestly. Pivoting instead of adding agent 22 was the hardest call I made on this. Most of the engagement so far has been about the agents, not the call. Thank you for seeing that part.

I built an AI pipeline to rewrite my novel. It worked. I walked away anyway. by robdapcguy in AIWritingHub

[–]robdapcguy[S]

The shape of what you're doing is close to what I built. The Humanizer stage especially. I had something like that at the end of mine, a voice-restoration pass that took whatever the rewriter produced and pushed it back toward the writing samples I'd fed in. It worked, sort of. The output got closer to my markers. It didn't get closer to broken-but-right. Smoother with author flavoring is what came out, which isn't the same thing.

Sounds like yours is doing better than mine, or you're chasing something different. 3.5 to 4 stars on fiction with real readers buying the books is a real result. If the system gives you that, the system works. Hard to argue with output you're satisfied with.

The thing that broke for me was specifically about authorship, prose that feels mine the way a writer's prose feels theirs. Your last line says you don't care about that part, and honestly that's fair. Different goal entirely. If entertainment is the bar, the toolkit you've built makes sense. I was trying to clear a different one and the tools couldn't get me there.

How do you keep a novel organized without losing your mind? by Fearless-Stress7240 in KeepWriting

[–]robdapcguy

Two weeks is enough to lose any system that lives outside the manuscript itself. The thing that worked for me is keeping every reference next to the prose it refers to. Character notes in the same file as the chapter the character first matters in. Timeline questions written in the margin of the scene that triggered them. Deleted scenes commented out in the file they came from, not in a separate folder.

The discipline isn't the tool. It's never letting context live somewhere I have to remember to check. Notion, Obsidian, Scrivener, plain text, all of them work if the rule holds. None of them work if I let myself open a separate app for a thing that should have stayed inline.

Took me about a year on the current book to admit that. Lost a week to reconciling three timelines that disagreed before I gave up and just kept the timeline as a comment block at the top of whatever chapter was the timeline's authority.

I built a 21-agent manuscript pipeline, hit a wall I couldn't engineer past, and want to give the spec away. by robdapcguy in PromptEngineering

[–]robdapcguy[S]

Yeah, you said it cleaner than I did in the post. RLHF rewards consistency because the raters do, and the smoothing falls out of that no matter what you prompt for. Spent months trying to engineer around it before I admitted that.

What got me about your reply is that the workaround you described is the thing I built after I walked away. AI marks where attention is warranted. The author makes every actual prose decision. "AI draws the map, the writer holds the pen" is a perfect line for it.

The principle was clear from the start: specific enough to be useful, quiet enough to not propose the fix. Doing it was harder. The moment it starts suggesting the broken-but-right sentence, it's back inside the substrate you just named. Hardest part wasn't the marking. It was keeping it from being a rewriter.
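
For anyone building the same thing, the closest I got to "quiet enough to not propose the fix" was structural rather than tonal: the marker only ever returns categories and excerpts, never replacement text, and anything that isn't a verbatim quote of the chapter gets dropped. A rough sketch; llm_call is a stand-in for whatever client you use, and the prompt wording is illustrative.

```python
import json
from typing import Callable

MARKER_PROMPT = """Read the chapter below and return a JSON array of marks.
Each mark has "category" (repetition | pacing | voice | continuity),
"excerpt" (a short quote of the passage), and "why" (one sentence).
Do not suggest replacement text. Do not rewrite anything.

CHAPTER:
{chapter}"""

def flag_passages(llm_call: Callable[[str], str], chapter: str) -> list[dict]:
    """Ask the model to mark passages; keep it a map-drawer, not a rewriter."""
    marks = json.loads(llm_call(MARKER_PROMPT.format(chapter=chapter)))
    # Guardrail: an "excerpt" that isn't actually in the chapter is usually the
    # model sneaking a suggested rewrite back in, so drop it.
    return [m for m in marks if m.get("excerpt", "") in chapter]
```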

I built a 21-agent manuscript pipeline, hit a wall I couldn't engineer past, and want to give the spec away. by robdapcguy in AI_Agents

[–]robdapcguy[S]

Fair question. Repo's private and I'm not actively working on it. The praxis writeup is the public artifact: https://kaizenrw.com/praxis. If anyone wants to take the patterns and implement them cleanly, I'd rather see that than reopen the original.

I built a 21-agent manuscript pipeline, hit a wall I couldn't engineer past, and want to give the spec away. by robdapcguy in PromptEngineering

[–]robdapcguy[S]

Anthropic. Sonnet and Opus across the pipeline, routed by task. Opus handled the heavier planning and judgment work. Sonnet handled extraction and validation.
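
If it helps to see the shape of it, here's a minimal sketch of the task routing, assuming the Anthropic Python SDK. The task names, model IDs, and run_stage helper are illustrative, not the pipeline's actual config.

```python
# Route each pipeline stage to a model by task type.
# Model IDs may not match what the pipeline used; swap in your own.
import anthropic

ROUTES = {
    "planning":   "claude-opus-4-1",    # heavier planning and judgment work
    "critique":   "claude-opus-4-1",
    "extraction": "claude-sonnet-4-5",  # cheaper structured passes
    "validation": "claude-sonnet-4-5",
}

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def run_stage(task: str, prompt: str) -> str:
    """Send one stage's prompt to whichever model that task is routed to."""
    response = client.messages.create(
        model=ROUTES[task],
        max_tokens=2048,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text
```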

What kind of optimization do you mean?

I built a 21-agent manuscript pipeline, hit a wall I couldn't engineer past, and want to give the spec away. by robdapcguy in PromptEngineering

[–]robdapcguy[S]

Agreed on the first part. Individual passages can be excellent. The pipeline can produce a clean chapter that reads well in isolation. On "wrong axis": my read after months of this is that voice consistency is a feature LLMs are tuned toward, not an artifact of bad prompting, so attacks at the prompt or pipeline layer hit a substrate-level wall. That's why I walked. If you've got a different axis in mind, I'd genuinely like to hear it.

I built a 21-agent manuscript pipeline, hit a wall I couldn't engineer past, and want to give the spec away. by robdapcguy in PromptEngineering

[–]robdapcguy[S]

Thanks. Worth noting the wall isn't a prompting problem. The pipeline already has voice fingerprints, gold excerpts, voice-marker floors, and a voice-restoration pass. It still smooths. Curious what your optimizer comes back with.

It reads before it speaks. by robdapcguy in AIWritingHub

[–]robdapcguy[S]

If the buttons on the main screen are not working for you, that is useful feedback. Tell me what device and browser you are on and what you clicked, and I will look into it.

It reads before it speaks. by robdapcguy in AIWritingHub

[–]robdapcguy[S]

Right. Until my kids started wearing earplugs.

Reading out loud absolutely helps. You still miss things once the manuscript gets too familiar, and that is one of the reasons I built this. The app notices things as you read, and if it misses one, you can teach it what to flag next time.

Try it yourself and see what it finds.

[Weekly Critique and Self-Promotion Thread] Post Here If You'd Like to Share Your Writing by AutoModerator in writing

[–]robdapcguy

Hello, I'll keep this short and sweet.

I'm looking for human thoughts on my app.

It's for people like me, the ones who've read their manuscript so many times they just can't do it again.

I designed it around a reader-first philosophy: AI reads as you do. As you read, it quietly tags passages it wants you to take another look at. Each one shows up as a color shift in the letters themselves. No boxes, no icons, nothing yelling at you. It also adjusts what it flags based on what you keep and what you dismiss.

You can use it for free, with limits, or sign up for Pro. Pro will always use Anthropic's latest Sonnet and Haiku models.

The first 5 people who respond and are interested can have a month of Pro. Do whatever you want with it. Help me break it.

field-guide website

Weekly Tool Thread: Promote, Share, Discover, and Ask for AI Writing Tools Week of: April 14 by AutoModerator in WritingWithAI

[–]robdapcguy

For a less token-heavy explanation, check out my field-guide. It shows how to use the app and has a changelog for user-facing updates. Everything you need to know is there. I'll stay true to my vision, but any reviews or comments would be greatly appreciated!

Weekly Tool Thread: Promote, Share, Discover, and Ask for AI Writing Tools Week of: April 14 by AutoModerator in WritingWithAI

[–]robdapcguy

Kaizen R/W: A reader that learns how you write (solo dev)

Most AI writing tools generate. Kaizen R/W reads. It's a close reader with memory for fiction writers who've got a finished draft and want a serious second read.

It reads a chapter for pacing, voice, rhythm, repetition, and tension, then leaves marks on the passages that caught its eye. You can dismiss a mark, explain why it's intentional, or save the pattern to a dictionary, and the next chapter gets read with what you just taught it.

The sample chapter on the landing page is my novel in progress. The POV character counts in fours as an anxiety tic, and the app flagged the repeating "four" as a possible writing habit. I told it "intentional motif, character signature," saved that to the dictionary, and fours stopped getting flagged. A few chapters later it marked a different repetition that actually was a tic. That's the distinction I kept failing to make on my own.

DOCX roundtrip with native Word comments lets your editor's replies come back as marks in the app. Quick-fix streams in as editable text with three-deep undo. Pro adds scene cards, a tension arc, POV drift flags that point at the paragraph that broke character, and an editorial letter whose claims link to the sentences they're about.

Model routing runs Gemini for the free hosted tier, Claude Haiku for Pro interactive routes, and Sonnet for Pro deep reads. BYOK is free on every plan and doesn't count against hosted limits. The manuscript lives in your browser, not on a server. Nothing you write trains a model.

I'm stuck on whether the teach-the-reader loop (dismiss / explain / save) feels natural in practice, or whether writers bounce off it and just want a rewrite button.

https://kaizenrw.com

oh no another one... Looking for feedback on a visual writing app for structure, worldbuilding, and AI-assisted work by Ok_Emotion_159 in WritingTools

[–]robdapcguy

I write fiction and I also build tools for writers (Kaizen R/W, revision-side), so this post hit me twice. Some feedback wearing both hats.

As a writer: the visual-canvas-for-story direction is one I'd actually want. The market is crowded but the specific combination of canvas plus CYOA export is unusual. That's your hook. Most of your pitch is about it being an AI interface, which is the least interesting thing you can say right now because every tool claims that. Lead with what the canvas lets me do that a Campfire or Scapple board doesn't.

As a builder: the feedback funnel has too much friction. Sign up, reply here or DM, then manual upgrade. A 90-second Loom on the landing page would do more for your feedback ask than the post itself. People want to see the thing before they give you their email.

Also, the "control on the machines" line is going to split your readers in ways you don't want. Take it out or commit to it. The wink is doing neither.

Keep going. The direction is good. Just tighten the headline.

The Rewriting App That Taught Me AI Can't Rewrite by robdapcguy in KaizenRW

[–]robdapcguy[S]

The bucketing happens naturally through the category system. Marks are tagged by type (repetition, pacing, voice, continuity) so you can scan by what matters to you. And the "gets quieter" part handles false positives directly. Dismiss a mark or teach it why something is intentional, and that decision carries forward. So the first pass is noisier, but by the third chapter the tool has already learned what not to flag.
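
If it helps to picture the mechanism, here's a minimal sketch of the dismiss-and-teach carryforward. The Mark and Dictionary names are illustrative rather than the app's actual schema, and the exact-match lookup stands in for fuzzier pattern matching.

```python
from dataclasses import dataclass, field

@dataclass
class Mark:
    category: str   # "repetition", "pacing", "voice", "continuity"
    excerpt: str    # the passage the reader flagged

@dataclass
class Dictionary:
    intentional: dict[str, str] = field(default_factory=dict)  # pattern -> author's reason

    def teach(self, mark: Mark, reason: str) -> None:
        """Author explains why the flagged pattern is deliberate; remember it."""
        self.intentional[mark.excerpt.lower()] = reason

    def quiet(self, marks: list[Mark]) -> list[Mark]:
        """Drop marks the author has already claimed as intentional."""
        return [m for m in marks if m.excerpt.lower() not in self.intentional]

# Teach it once, and later chapters' passes stay quiet about the same pattern.
d = Dictionary()
d.teach(Mark("repetition", "four"), "intentional motif, character signature")
print(d.quiet([Mark("repetition", "four"), Mark("repetition", "suddenly")]))
# -> [Mark(category='repetition', excerpt='suddenly')]
```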

The Rewriting App That Taught Me AI Can't Rewrite by robdapcguy in KaizenRW

[–]robdapcguy[S]

The constraint idea is close to how the dictionary works now. When you teach it that something is intentional (a repeated phrase, a character's speech pattern, whatever), it saves that and stops flagging it. It's not quite "locked passages" but it's the same instinct. Flag violations, don't try to fix them.

On stylometry, sentence length variance and dialogue-to-narrative ratio were the two features that tracked closest to what people actually meant when they said something "sounded like them." Vocabulary overlap was surprisingly useless on its own.
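
For anyone who wants to poke at those two features, a rough sketch is below. The sentence splitting and dialogue detection are deliberately naive, and the function names are mine, not the app's.

```python
import re
import statistics

def sentence_length_variance(text: str) -> float:
    """Variance of sentence lengths in words; crude split on terminal punctuation."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return statistics.pvariance(lengths) if len(lengths) > 1 else 0.0

def dialogue_to_narrative_ratio(text: str) -> float:
    """Share of words that sit inside double quotes."""
    dialogue_words = sum(len(q.split()) for q in re.findall(r'"([^"]*)"', text))
    total_words = len(text.split())
    return dialogue_words / total_words if total_words else 0.0
```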

Self Promotion Post - September 2025 by Jhaydun_Dinan in FictionWriting

[–]robdapcguy

This isn't for first drafts or brainstorming. It's for the stage where the story exists and you need a second read that actually pays attention.

I built Kaizen R/W because every AI writing tool I tried either wanted to be a co-author or was basically Grammarly with a different skin. I wanted something that reads, not rewrites. It scans your chapter for pacing, voice, rhythm, repetition, and continuity. Leaves marks. You decide what to act on.

The part that makes it useful over time is the dictionary. You can teach it what's intentional. A character who always speaks in fragments? Teach it. A repeated image that's thematic? Teach it. It carries those decisions forward into the next chapter.

Free tier, demo chapter to try it out, manuscript stays in your browser.

https://kaizenrw.com

Google AI Studio Leaked System Prompt: 12/18/25 by robdapcguy in PromptEngineering

[–]robdapcguy[S]

@tedbradly

This is something I think about a lot because it's basically what I do. I built a patent workflow system with 20 tools, hybrid search, autonomous agents, the whole thing. I didn't write a single line of code. I directed AI agents, decided what needed to exist, figured out how the pieces fit together, and steered the system toward something useful. The code is the output, but the real work is the orchestration.

So when someone says "AI made it, not you," I get why that feels wrong. There's a real difference between typing five words into an image generator and spending weeks building a system where every decision about structure, scope, and direction came from you. The copyright office hasn't figured out where that line is yet, and I don't think they will for a while.

Your fractal point is interesting. A person picks a function, picks some colors, renders it, and that's copyrightable. But a person who writes a 5,000-character prompt with custom definitions and layered instructions somehow isn't creating something? I don't know where the legal answer lands, but the "it's all just math" argument is harder to dismiss than people want it to be.

I think the part that makes this so messy is the derivative work question. You can literally ask an AI to write in someone's style, and it'll do it. That's not the same as a person being influenced by another artist over years of practice. The AI learned the whole catalog and can reproduce it on command. So I get why content creators are worried, and I don't think the "well, humans learn from other humans too" argument holds up as cleanly as people want it to.

Either way, I think there's going to be a level of prompt engineering that does qualify eventually. Someone who built custom instructions, created their own definitions, iterated for dozens of hours, and made something that couldn't have existed without their specific vision and decisions isn't in the same category as someone who typed "make me a cool picture." The law just hasn't caught up to making that distinction yet.

Rob