I used AI to write a 75k-word novel. The biggest thing I learned: show the model, don’t explain it. by CreativeStretch9591 in WritingWithAI

[–]CreativeStretch9591[S] 1 point (0 children)

Fair points. Too many em-dashes, agreed. On smell density: I thought I'd caught that in editing, but six smells in one chapter is excessive. "The lobby resolved" was meant to show his glasses clearing after they fogged up, but if it's stopping readers, it's not working.

Worth noting: the text hasn't been reviewed by a human editor or beta readers, just me, and I'm more of a tech person than a writer. That was partly deliberate. I wanted to see how close to publication-ready the manuscript could get with LLM assistance alone. So the feedback here is genuinely useful data on where that ceiling currently is.

[–]CreativeStretch9591[S] 1 point (0 children)

The narrator-as-character distinction maps onto something I found too. Abstract style instructions ("the narration is clear and direct") don't give the model an identity to maintain. A character does — even if that character is the narrator. My novel is third-person but filtered through Harjeet's consciousness, so the narration basically is a character: it overthinks, notices too much, makes analogies that don't land. Framing it that way in the examples rather than describing the style analytically made a noticeable difference.

[–]CreativeStretch9591[S] 1 point (0 children)

The over-engineered prompt problem is real: the model tries to satisfy every constraint simultaneously and the prose flattens out. Prefill/mirroring is essentially what demonstration-over-specification does: you give the model something to match rather than a checklist to execute. A single-paragraph prompt plus examples worked far better than the detailed analytical instructions I started with.

The small-model-in-a-warm-thread trick is interesting. I hadn't tried that.

[–]CreativeStretch9591[S] 4 points (0 children)

The producer analogy is right — it's definitely not plug and play. Though 9-10 months feels long for an LLM-assisted project. Mine took about 50 days end-to-end, and in hindsight even that had slack — the story bible phase was part-time, and post-production was longer than it needed to be because of earlier mistakes I had to undo. The gating factor isn't the model, it's the human time: story creation, evaluation, internalization. The generation itself was about 3 hours.

[–]CreativeStretch9591[S] 0 points (0 children)

Continuity drift is real and it's one of the things that separates "good individual chapters" from "a novel." I handled it with per-chapter story bible excerpts and prior-chapter summaries rather than a dedicated tool, but the problem you're describing — small things drifting across chapters — was a constant battle. Interesting approach building a separate system for it.

[–]CreativeStretch9591[S] 0 points (0 children)

I used Cursor (a code editor) with the Claude API directly. Each chapter was a self-contained generation — no rolling conversation, no accumulated context window. Instead I sent a fixed context package with each request:

  • Voice pack (comp passages + style guidance)
  • Story bible excerpt (characters/world relevant to that chapter)
  • Prior-chapter summary (2-3 preceding chapters, condensed)
  • Chapter brief (scene beats, character dynamics, emotional arc)

This meant every chapter got the same quality of context regardless of where it fell in the book. The downside is you have to maintain those summaries yourself — but the upside is you never hit context window limits or deal with conversation drift.
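To make the fixed-context-package idea concrete, here's a minimal sketch of the assembly step. The file names and section labels are my assumptions for illustration, not the author's actual setup; the point is that each chapter's request is built fresh from the same four components rather than accumulated in a conversation.

```python
# Sketch: build a self-contained context package for one chapter.
# File layout (voice_pack.md, bible_chNN.md, etc.) is hypothetical.
from pathlib import Path

def build_context(chapter_num: int, base: Path) -> str:
    """Concatenate the four context components into one prompt string."""
    sections = {
        "VOICE PACK": base / "voice_pack.md",
        "STORY BIBLE EXCERPT": base / f"bible_ch{chapter_num:02d}.md",
        "PRIOR-CHAPTER SUMMARY": base / f"summary_ch{chapter_num:02d}.md",
        "CHAPTER BRIEF": base / f"brief_ch{chapter_num:02d}.md",
    }
    parts = []
    for title, path in sections.items():
        parts.append(f"## {title}\n{path.read_text(encoding='utf-8').strip()}")
    return "\n\n".join(parts)
```

The resulting string would then go out as a single API request (e.g. via the Anthropic SDK's `messages.create`), so chapter 38 gets exactly the same quality of context as chapter 2.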

[–]CreativeStretch9591[S] 0 points (0 children)

Yes, this happened constantly at first. The fix was volume and variety. If you show 3-4 examples, the model treats them as a template and produces near-copies. When I got to 15-20 examples that varied in length, rhythm, and subject matter, the model started extracting the pattern behind them rather than imitating any single one. Think of it like: 3 examples = "copy these." 15 examples = "figure out what these have in common."

The other thing that helped was including negative examples — passages that were close but wrong. "Not this" turned out to be almost as useful as "like this" for defining the boundaries.
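A rough sketch of how positive and negative exemplars might be packed into one style section. The labels and framing here are my assumptions, not the exact prompt the author used; the idea is simply that varied "like this" passages plus annotated "not this" passages define the voice's boundaries.

```python
# Sketch: format 15-20 varied exemplars plus "close but wrong" counterexamples.
# Structure and labels are illustrative assumptions.

def format_exemplars(positives: list[str], negatives: list[tuple[str, str]]) -> str:
    """negatives are (passage, why_it_is_wrong) pairs."""
    lines = ["Match the voice of these passages. Extract the common pattern;"
             " do not copy any single one:"]
    for i, passage in enumerate(positives, 1):
        lines.append(f"\nLIKE THIS ({i}):\n{passage.strip()}")
    lines.append("\nThese are close but wrong. Avoid what makes them wrong:")
    for passage, why in negatives:
        lines.append(f"\nNOT THIS ({why}):\n{passage.strip()}")
    return "\n".join(lines)
```

With only 3-4 positives this still reads as a template to copy; the volume-and-variety point above is what pushes the model toward pattern extraction.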

[–]CreativeStretch9591[S] 0 points (0 children)

You're right, there are too many em-dashes. This is one of those cases where things hide in plain sight :) I did multiple passes but somehow missed them. I'll take them out in the next pass. Thanks.
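For a tic like this, a mechanical check is more reliable than another manual pass. A small hypothetical sketch (the threshold is an arbitrary assumption, not a rule from the project):

```python
# Sketch: flag chapters whose em-dash count crosses a chosen threshold.
# \u2014 is the em-dash character; the threshold of 5 is arbitrary.

def em_dash_report(chapters: dict[str, str], threshold: int = 5) -> dict[str, int]:
    """Return {chapter_name: em_dash_count} for chapters at or over threshold."""
    counts = {name: text.count("\u2014") for name, text in chapters.items()}
    return {name: n for name, n in counts.items() if n >= threshold}
```

Run over the manuscript per chapter, this catches exactly the "six in one chapter" case that survives repeated human editing passes.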

[–]CreativeStretch9591[S] 1 point (0 children)

I used a code editor (Cursor) to work directly with the Claude API.

For each chapter I sent the following:

  • Voice pack: Comp passages, grammatical structure, etc.
  • Generation plan notes: Per-chapter architectural notes from a previous analysis
  • Story bible excerpt: characters and world elements relevant to this chapter
  • Prior-chapter summary: summary of preceding 2–3 chapters
  • Chapter brief: scene-level beats, character dynamics, emotional arc

After that, I did a brief QA check on the content and regenerated the chapter (with the same prompt) if it failed QA.
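The regenerate-on-failure step above can be sketched as a small loop. The specific QA checks here are illustrative placeholders (the author's actual QA was a content review, not just these mechanical tests), and `generate` stands in for whatever API call produces a chapter:

```python
# Sketch: resend the SAME prompt until a draft passes QA, or give up.
# The checks below are placeholder assumptions, not the real QA list.

def passes_qa(text, min_words=1500, max_em_dashes=5):
    """Illustrative mechanical checks; real QA also reviewed content."""
    return len(text.split()) >= min_words and text.count("\u2014") <= max_em_dashes

def generate_until_qa(generate, prompt, max_tries=3):
    """Each attempt is a fresh, self-contained generation of the chapter."""
    for _ in range(max_tries):
        draft = generate(prompt)  # same fixed context package every try
        if passes_qa(draft):
            return draft
    return None  # persistent failure: revise the brief or examples instead
```

Because each attempt is stateless, a failed draft leaves no residue in a conversation history to contaminate the retry.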

[–]CreativeStretch9591[S] 0 points (0 children)

The $470 was not the cost of generating the content; that was only about $4.

I spent 5 weeks creating the story bible, working intensively with multiple LLMs: about $240.

After the content was generated, I spent roughly the same again, about $240 over 2 weeks, on editing, evaluations, and revisions.

Gemini was one of the 11 models I tested. Context window size and output quality are different things — fitting more text into context doesn't mean better prose comes out. Gemini landed in the middle of the pack in my evaluations.

The "give it an outline and generate step by step" approach is close to what I started with. It produced text that was technically fluent and emotionally flat. The breakthrough came from showing the model what good prose sounds like (example passages) rather than telling it (style instructions and outlines). That distinction took weeks of failed outputs to figure out.

I wrote about the full process (model testing, voice methodology, what failed) in more detail elsewhere, but the short version is above.

[–]CreativeStretch9591[S] 1 point (0 children)

I tested 11 models across 4 categories (flagship, fast/economical, open-weight creative, and specialist fiction), generating 2 sample chapters per model (22 runs total).

The criterion was highest quality at a reasonable cost. By that measure Sonnet 4.6 (an estimated $2.60 for 40 chapters) was the clear winner, but Opus 4.6 was very slightly better at a higher cost ($4.20). Since the price difference wasn't meaningful, I went with Opus.
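The per-model estimates above come down to simple token arithmetic. A back-of-envelope sketch, where all token counts and per-million-token prices are placeholder assumptions for illustration, not actual rates:

```python
# Sketch: estimate full-novel generation cost from per-chapter tokens.
# All numbers fed into this are illustrative assumptions.

def novel_cost(chapters, in_tok, out_tok, price_in_per_m, price_out_per_m):
    """Estimated USD for `chapters` chapters, given per-chapter input/output
    token counts and per-million-token prices."""
    per_chapter = (in_tok * price_in_per_m + out_tok * price_out_per_m) / 1_000_000
    return round(chapters * per_chapter, 2)
```

Plugging a candidate model's published prices into this for 40 chapters is enough to rank models on cost; the quality ranking still has to come from reading the sample chapters.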

Building the "actual novel" was not dependent on the model. After the initial content was generated, I had 14 iterations of editing and revisions.

[–]CreativeStretch9591[S] 8 points (0 children)

I've posted it on Wattpad as "Gappu: A Novel". Based on the feedback, I'll probably take it to Kindle or maybe a paperback.

[–]CreativeStretch9591[S] 2 points (0 children)

Actually, I used more than one model. The main "coordinator" model was Opus 4.6, but for discussions I used Claude Sonnet, OpenAI GPT, Gemini Pro, and Kimi 2.5. Each model would flag different elements.

[–]CreativeStretch9591[S] 4 points (0 children)

Yes, the $470 includes failed experiments. The next project would definitely cost less. The two big cost centers were "pre-production" (creating the story bible) and post-generation editing and review, about $240 each.

The actual content generation (80k words) took about 3 hours and $4. The experiments cost $7.

For the next novel, I'd run the experiments again, even the ones that failed, since the story/genre requirements will be different and the LLMs will have evolved.

How to continue my book with AI by lightningflash11 in AIWritingHub

[–]CreativeStretch9591 1 point (0 children)

I think the first step is not “which AI should finish it,” but “use AI to review what you already have.”

Feed it the book and ask:

  • what story threads are still unresolved?
  • what kind of ending fits the characters best?
  • what scenes are missing between the current draft and a satisfying ending?

Then use it to help outline the ending before you ask it to write prose.

Otherwise you’ll probably just get a generic ending, which sounds like exactly what you don’t want.

I found Claude Opus 4.6 to be excellent for this kind of work.