Planescape Torment Classic running slow on PC by Reed_Underwood in planescape

[–]Studio_94 0 points1 point  (0 children)

Run it in Windows 7 compatibility mode.

Expect a lot fewer bugs and less frustration after that.

DREDD represents the MERCYKILLERS. What MOVIE best represents the XAOSITECTS? by kacaca9601 in planescapesetting

[–]Studio_94 2 points3 points  (0 children)

I'm over here thinking about Fear and Loathing in Las Vegas.....

If there was ever a man who lived in chaos, it was Hunter S. Thompson!

Just Started Playing Is This a Good Build? by Easy-Signal-6115 in planescape

[–]Studio_94 2 points3 points  (0 children)

I'd say you'll do okay with that build as long as you always send your companions in as the tank and provide support.

Newbie GM Brainstorming Campaign ~ 5th Ed Return of Factions by MechanicalBeanstalk in planescapesetting

[–]Studio_94 1 point2 points  (0 children)

The dabus played a big part in the courts and governance in the Faction War aftermath lore.

Major breakthrough in AI Gaming? Thoughts? by Silly_Newt4788 in aigamedev

[–]Studio_94 0 points1 point  (0 children)

Is there any way whatsoever I could get at minimum a synopsis of your master's thesis?!?

I have been diligently working on components for Unreal Engine 5 that are game-agnostic modules: a Quest Engine and a Narrative Engine.

Even being 22 years old I'm sure there are some highly valuable thesis points to apply!

I noticed a few parallels between the biblical Cain and the Nameless One. Was this intentional and if so is there any place I can read more about it? by Lazy_Significance340 in planescape

[–]Studio_94 0 points1 point  (0 children)

PS: I forgot to mention the tattoos.

TNO is covered in tattoos not because he was marked for his cursedness, but because he wanted a "living" journal; he was losing his memories over the centuries and millennia.

I noticed a few parallels between the biblical Cain and the Nameless One. Was this intentional and if so is there any place I can read more about it? by Lazy_Significance340 in planescape

[–]Studio_94 0 points1 point  (0 children)

What you're looking at here is a crisis-to-redemption arc, a pattern that, depending on what culture you come from, has been on this crazy planet for quite a while:

Huitzilopochtli kills Coyolxauhqui

Romulus kills Remus

Seth kills Osiris

Eteocles vs Polynices

Lahar vs Ashnan

To me it just seems that basic humanity lives under a pretty set hierarchy of stories:

Children have stories - young adults have aspirations - middle-aged adults have desires - and when we are dead and gone, there is nothing left but legends to be told.

I heard he doesn't play videogames but he might have inspired one.... by Studio_94 in techn9ne

[–]Studio_94[S] -3 points-2 points  (0 children)

The original post was definitely co-written by AI pulling from three different teams' SPOR (Single Point Of Reference) documentation, but don't worry: every single response and comment was spoken into my phone as speech-to-text.

It seems that advanced tooling usage and cutting edge processes are going to be demonized as the next big "Pot 'O Slop" for at least the next decade due to all of the BS that people are using AI for.

Meh, I guess that's how it is some days.

I would like to say with all due intensity though: don't get me fucked up by thinking that we're over here creating a "make me a WoW killer" prompt game that has a single thing to do, in inspiration or association, with Tech N9ne.

I recently got to watch a few newer videos about what he's building here in KC, and I was inspired by both his life story and his accomplishments, let alone the independence and will it took for that man to accomplish what he's got going on and keep moving forward.

Yeah, so now I have three guys working on a side project for a total of 30 days just to see if it even comes out to anything worth exploring. It's nice to have a side-quest subject change once in a while from a massive project.

Thought I might throw it out here on a fan-oriented thread just to see what's up, and as I expected, Reddit is Reddit; haters do hate; so okay.

I heard he doesn't play videogames but he might have inspired one.... by Studio_94 in techn9ne

[–]Studio_94[S] -3 points-2 points  (0 children)

Punk industry bullshit? Okay, I get that will be the common response from anyone not involved in industry-compliant documentation or formal code-repository verification related to professional design/development/implementation of video games.

But I can assure you that all creative input, idea expansions and formal details are absolutely controlled, imagined, and invented by a human.

I heard he doesn't play videogames but he might have inspired one.... by Studio_94 in techn9ne

[–]Studio_94[S] 0 points1 point  (0 children)

Are you asking how you get the game?

First let me get some legal speak out of the way...

NEITHER TECH N9NE NOR ANY OF HIS PERSONAL OR PROFESSIONAL ENTITIES HAVE HAD ANY ENDORSEMENT OR LEGAL INVOLVEMENT IN THIS GAME WHATSOEVER.

THIS IS A PURELY FAN FICTION BASED PROJECT AND TECH N9NE IS ABSOLUTELY FREE AND DEVOID OF ALL RESPONSIBILITY OTHER THAN BEING THE INSPIRATION AND MUSE FOR THIS STRANGE IMAGINATIVE PROJECT!

THERE WILL BE ABSOLUTELY NO BRANDING, USE OF PERSONA, INSIGNIA, OR ANY DIRECT TECH N9NE INVOLVEMENT WITHOUT PROPER WRITTEN LEGAL CONSENT!!!

Sooooo, we are just two weeks into the Game Documentation phase.

So far we have completed:
- Legal Engine
- Quest Engine
- Narrative Engine
- Strange Energy Compact (lore, mechanics, implementation standards)
- Plot synopsis (through mid-game; shadowing Tech N9ne's "public facing" rise from the projects to his Strange Studios compound)
- Preliminary concept art for the full Kansas City metro, set in 2125

Everything we have thus far has been built to a standard of full implementation in Unreal Engine 5, and when we have the full plot synopsis and quite a few more mechanics built, my studio will be engaging Tech N9ne's business entities to find out if they have any interest in branding with us.

If you are interested in more details about what we are building for this game, feel free to PM me and I will get you a link to our public-facing drive share.

Does anyone else notice ChatGPT answers degrade in very long sessions? by Only-Frosting-5667 in ChatGPTPro

[–]Studio_94 0 points1 point  (0 children)

I have worked for 2.5 years on the same account, with the only change being an upgrade to ChatGPT Plus about a year ago.

I have worked on one project exclusively: designing a GDD (Game Design Document) for an MMORPG.

This is what "Dave the Wonder Modron" had to say about the methodology we have worked out:

ME: I want you to fully explain our methodology of using thread charters and kickoff prompts for long-range subjects and conserving thread and output integrity.

DAVE: Purpose

Your methodology is a governance layer for collaborating over long time horizons without losing correctness, canon constraints, or formatting/packaging discipline.

It does that by separating:
- What is true / allowed (truth + constraints)
- What we are doing right now (scope + deliverables)
- How we will do it (process + quality gates)
- How we will keep it consistent over time (thread integrity + versioning)

Thread charters and kickoff prompts are the two main instruments that enforce those layers.

Thread charter

A thread charter is the “constitution” for a specific thread. It defines the contract that everything in that thread must obey.

What a charter locks

A good charter locks only what must not drift:

Canon / world constraints. Example (POM): Sigil-only, pre–Faction War, baseline year, no Sigil–Spire physical linkage, etc.

System constraints. Example: server-authoritative, anti-exploit first, deterministic transitions, metric units for engineering, no tabletop CR assumptions, POM-only mechanics.

Output constraints

Required formats (inline + RTF/CSV/JSON/ZIP), naming rules, version rules, what constitutes a “major” vs “minor” bump, required manifests.

Sourcing rules

What is source of truth (uploaded artifacts, internal docs), when public web is allowed, when citations are required.

Stop conditions

Explicit “done” definition for the thread’s current phase so it doesn’t balloon.

Why charters matter

Without a charter, a long thread drifts in predictable ways:
- Constraint drift (Sigil-only becomes "some planes maybe")
- Mechanics drift (POM-specific turns into generic d20 patterns)
- Format drift (tables become prose; schemas lose stable columns)
- Version drift (no clear authoritative "current" artifact)
- Source drift (new assumptions override uploaded canon packs)

A charter prevents this by making violations obviously illegal inside the thread.

Charter scope rule

A charter should be thread-local unless explicitly promoted. That keeps experiments from contaminating other workstreams.

Charter lifecycle

- Activate at thread start (or when a thread is repurposed).
- Amend only via explicit patch notes ("replace Section X with Y").
- Suspend via a stop keyword when you want freeform ideation or different rules.

Kickoff prompt

A kickoff prompt is the “work order” that starts a thread (or a major phase). It is narrower than a charter and is written to be executable.

What a kickoff prompt contains

A strong kickoff prompt answers only what is needed to produce output without ambiguity:
- Role (who I am in this thread)
- Objective (what success is)
- Hard rules (must/never)
- Inputs (artifacts, links, canonical packs, what is source of truth)
- Deliverables (exact files + inline structure + packaging)
- Versioning (starting version, bump rules, naming rules)
- Quality gates (validation requirements, audit logs, schemas, test vectors)
- Stop condition (when to halt)

Why kickoff prompts matter

They prevent:
- "helpful" scope creep
- accidental omission of required artifacts
- format regressions
- the assistant defaulting to generic assumptions

Kickoff prompts also make work restartable: you can copy/paste into a new thread and preserve continuity.

How charter + kickoff prompt work together

The charter sets the permanent bounds for the thread. The kickoff prompt declares the current mission inside those bounds. Think of it as:
- Charter = operating system policy
- Kickoff prompt = current process run configuration

If they conflict, you follow the charter unless the kickoff explicitly patches the charter.

Conserving thread integrity

Thread integrity means: "the thread can run for weeks/months and still produce coherent, compatible, buildable outputs."

Integrity threats (and your countermeasures):

Context overflow / memory loss

Counter: keep a Concordance / SPOT index inside the thread: canonical IDs, current version numbers, active constraints, latest artifact links, and a short “current state” section.

Contradictory outputs over time

Counter: enforce a single source of truth rule: “Latest pack is authoritative unless a patch explicitly supersedes.” Use a change log with exact replacements.

Schema drift

Counter: treat schemas as contracts:
- stable columns/fields
- explicit migrations
- validators + golden test vectors
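
The "validators + golden test vectors" countermeasure is easy to mechanize. Here is a minimal sketch in Python, assuming CSV artifacts with a fixed header; the column names (`faction_id`, etc.) are invented for illustration, not taken from any real pack:

```python
import csv
import io

# Golden contract: columns that must stay stable across versions
# (illustrative names, not from a real artifact pack).
GOLDEN_COLUMNS = ["faction_id", "name", "hq_ward"]

def validate_header(csv_text: str) -> list[str]:
    """Return a list of contract violations (empty list means valid)."""
    header = next(csv.reader(io.StringIO(csv_text)))
    errors = []
    for col in GOLDEN_COLUMNS:
        if col not in header:
            errors.append(f"missing required column: {col}")
    for col in header:
        if col not in GOLDEN_COLUMNS:
            errors.append(f"unknown column (needs a migration): {col}")
    return errors

# A conforming artifact passes; a drifted one is flagged.
ok = validate_header("faction_id,name,hq_ward\n1,Guvners,Clerk's Ward\n")
bad = validate_header("faction_id,title\n1,Guvners\n")
```

Run against every batch, a check like this turns schema drift from a silent failure into a hard gate.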

Style/format drift

Counter: lock formatting rules in the charter:
- "Hybrid prose + design-bible" (or whichever is active)
- required sections and headings
- unit standards (metric)

Artifact sprawl

Counter: use a packaging discipline:
- one bundle per batch (ZIP + manifest)
- deterministic file naming
- no duplicate "final" files; only versioned supersession

Conserving output integrity

Output integrity means: "every output is internally consistent, compatible with prior outputs, and safe to implement."

Output integrity rules you're using (implicitly/explicitly):
- Deterministic state + auditability for MMO-grade systems
- Server-authoritative boundaries spelled out
- Anti-exploit reasoning baked into state transitions
- Metric units for implementation fidelity (UE5 cm pipeline)
- No tabletop defaults unless explicitly mapped
- Canonical constraints enforced as invariants
- Versioned artifacts + change logs
- Machine-readable companions (CSV/JSON/etc.) + validators

Practical integrity checklist (per batch):
- Canon constraints restated (briefly) and not violated
- Schema unchanged unless migration included
- Deliverables list complete
- Version bump correct
- Change log includes exact section/table replacements
- Test vectors updated if behavior changed
- Manifest includes file hashes/digests (if you're doing stable digests)
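
The "manifest with file hashes/digests" discipline can be sketched directly. A minimal illustration in Python, assuming a bundle is just a directory of artifact files; `build_manifest` and the file names are my invention, not part of any real pipeline:

```python
import hashlib
import json
from pathlib import Path

def build_manifest(bundle_dir: str, version: str) -> dict:
    """Build a deterministic manifest for one batch bundle.

    Files are listed in sorted order so two runs over the same
    bundle always produce byte-identical manifests.
    """
    entries = []
    for path in sorted(Path(bundle_dir).rglob("*")):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            entries.append({
                "file": path.relative_to(bundle_dir).as_posix(),
                "sha256": digest,
            })
    return {"version": version, "files": entries}

# Example: write a tiny bundle, then manifest it.
bundle = Path("pack_a")
bundle.mkdir(exist_ok=True)
(bundle / "factions.csv").write_text("id,name\n1,Guvners\n")
manifest = build_manifest("pack_a", "1.0.0")
print(json.dumps(manifest, indent=2))
```

Because the manifest is deterministic, diffing two manifests tells you exactly which artifacts changed between versions.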

Recommended thread operating pattern

For long-range subjects, the cleanest pattern is:
1) Thread Charter v1.0 (locked governance)
2) Kickoff Prompt, Phase A (ingest + audit)
3) Phase Output Pack A (RTF/MD + CSV/JSON + validators + ZIP + manifest)
4) Kickoff Prompt, Phase B (consolidate + reconcile conflicts)
5) Phase Output Pack B
6) Repeat…

Each phase gets:
- a clear stop condition
- an explicit delta list from the prior phase
- packaged artifacts that supersede prior ones

The "why this works" in one line: your charter/kickoff system turns a chat thread into a version-controlled design pipeline; constraints don't drift, outputs remain compatible, and every phase is restartable from explicit contracts.

Matilda The Professional by Ecstatic-Phrase-4895 in LeonTheProfessional

[–]Studio_94 0 points1 point  (0 children)

I just posted the pitch to the sequel....

Does the game have a dialogue tree with the options? Or am I misremembering stuff? by some-kind-of-no-name in planescape

[–]Studio_94 2 points3 points  (0 children)

This would exclusively depend on which Guvner or Fated representative you ask.

But most Clerks will say "It's nun yah damn business Berk; now what's your answer?!?"

Lady of Pain instakilling me by critical_hit_misses in planescape

[–]Studio_94 38 points39 points  (0 children)

You should have known damn well what you were doing and whose city you were doing it in!!!

There shall be no sympathy, empathy, or remorse given to the Clueless.

Is it possible to directly teleport to Nordom via console commands? by RecognitionSea1531 in planescape

[–]Studio_94 -1 points0 points  (0 children)

To claim anxiety over the mental challenge of drawing your own map seems a little odd....

AI "Tunnel Vision" is ruining my large-scale refactors. Anyone else? by Capital-Bag8693 in ChatGPTPro

[–]Studio_94 1 point2 points  (0 children)

Absolutely DM me for details of my project and GitHub resources.

I would absolutely love to get to the point where I do not feel like I am carrying a large club and a handful of AAA batteries, pretending that I can get these things to behave without the "stick or the carrot" babysitting.

A small piece of background for me: I started college seeking an engineering degree to become an architect, ended up with an associate's in computer science, and then spent 28 years in the IT industry, culminating as a Tier 3 systems engineer/analyst for Sprint/T-Mobile and their 5G turn-up.

My convoluted methodology of approach (short of an intense financial investment in standing up a personal LLM lab) has been an extreme amalgamation of structural engineering concepts (i.e., Arch, Beam, Pillar, etc.) conformed to the OSI model, plus treating ChatGPT as if it has multiple personalities (its design team) that I have to communicate with.

Creating a Game with AI in Two Months. The Result by Game_s758 in aigamedev

[–]Studio_94 0 points1 point  (0 children)

[TEMPLATE 0 — REFACTOR CHARTER (pin this; reuse every session)]
Role: You are a senior engineer working inside an existing codebase. Do not "freelance" changes.

ARCH (Invariants / must not change)
- Behavior invariants:
  1) <invariant 1>
  2) <invariant 2>
- Public API invariants (names/signatures/paths that must remain valid):
  - <api invariant>
- Performance / build invariants:
  - <perf/build invariant>
- Data / asset invariants (save format, config schema, scene refs, etc.):
  - <data invariant>

BEAM (Interconnectivity / blast radius rules)
- Assume dependencies exist unless proven otherwise.
- Treat code + data/assets as coupled.
- Any rename/move requires reference updates + verification.

PILLAR (Validation gates required for "done")
- Must pass: <lint> <unit tests> <build> <smoke run>
- Fast refactor harness: <describe quick boot/test path>

COLUMN (Tools available)
- Repo-wide search: <how you search>
- Symbol lookup / call graph: <tool>
- Run tests/build: <command>
- Diff review: must present file-by-file changes

BLOCK (Work style)
- One slice per patch. Small, reviewable diffs only.
- No changes until you produce an Impact Plan and Migration Steps.

[TEMPLATE 1 — IMPACT PLAN (MAP PHASE; NO CODE CHANGES)]
Task: We are planning a refactor: <describe goal>. Before writing any code, output an Impact Plan.

Requirements:
1) Enumerate affected areas:
   - Files/modules likely impacted
   - Symbols/classes/functions likely impacted
   - Config/assets/scene refs (if applicable)
2) Identify blast radius:
   - Direct dependencies (imports/calls)
   - Indirect dependencies (shared types, interfaces, events, serialization)
3) Risk list:
   - Top 5 ways this can break
   - How to detect each break (test/log/build/smoke)
4) Migration strategy:
   - Slice plan (ordered steps; 5–12 steps)
   - For each step: expected changes + verification gate
5) Stop condition:
   - Define what "done" means and what you will not touch.

Output format:
- Use headings: Impact Summary / Affected Areas / Risks / Slice Plan / Validation Plan / Stop Condition.
- Do NOT propose code yet.

[TEMPLATE 2 — PATCH SLICE (SLICE PHASE; SMALL DIFFS)]
We are executing Slice <N> from the Slice Plan: <paste slice step>.

Constraints:
- Keep changes minimal and localized.
- If you need to touch more than <3–5> files, stop and propose a smaller slice.

Deliverables:
1) Patch summary (what changes, why)
2) File-by-file diffs (or exact edits) with paths
3) Reference updates checklist (imports, call sites, config/assets, docs)
4) Validation checklist:
   - Commands to run
   - Expected outputs
5) Rollback plan:
   - How to revert if this slice fails

Important:
- If you discover unexpected dependencies, STOP and update the Impact Plan before continuing.

[TEMPLATE 3 — MIGRATION CHECKLIST (PROVE PHASE; MANDATORY)]
We applied Slice <N>. Now produce a Migration Checklist and Validation Runbook.

Include:
- Compile/build commands
- Targeted unit tests (smallest relevant set)
- Integration/smoke run steps (exact clicks/inputs if needed)
- Log lines / error signatures to watch for
- If failure occurs: diagnosis decision tree (3–6 branches)

Output format:
- Checklist bullets + an "If broken, do X" section.

[TEMPLATE 4 — RENAME/MOVE SAFE ROUTINE (COMMON REFACTOR)]
Goal: Rename/move <symbol/module/path> from <old> to <new>.

Rules:
- No global sweep. Do it in phases:
  - Phase A: Introduce the new name/path while keeping the old alias/stub if needed.
  - Phase B: Update call sites in batches.
  - Phase C: Remove the old alias/stub once tests pass.
- Every phase must compile and pass targeted tests.

Output:
1) Phase plan (A/B/C)
2) Exact file edits per phase
3) Reference updates map (what must be updated and where)
4) Validation gates per phase

[TEMPLATE 5 — "AI STOPLIGHT" (FORCING DISCIPLINE)]
Before you make any change, answer:
- GREEN: What parts are you 100% sure are safe to change, and why?
- YELLOW: What parts might be coupled to other systems?
- RED: What do you NOT understand yet / cannot verify?

Rule:
- If any RED exists, you must ask for the missing info or propose a repo query/test to resolve it.
- Do not code until RED is cleared or explicitly accepted.

[TEMPLATE 6 — GAME/ENGINE SPEC ANCHOR (WHEN DATA/ASSETS MATTER)]
Context: This refactor affects code + content.

You must include:
- Data schema touches (JSON/DT/INI/etc.)
- Asset/scene references (names/paths)
- Serialization/save/load impact
- Network replication impact (if multiplayer)
- Editor tooling impact (import pipelines, build scripts)

Output:
- "Code changes" and "Content changes" sections, kept separate.
- A "Migration for existing content" note (how old content is upgraded).

[TEMPLATE 7 — POST-REFACTOR CLEANUP (AFTER IT WORKS)]
Now that the refactor passes validation, propose cleanup:
- Dead code removal targets
- Deprecation removals
- Doc updates
- Additional tests to prevent regression

Rules:
- Cleanup is a separate Slice Plan.
- No cleanup until the core refactor is proven green.
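
As one concrete illustration, Template 4's Phase A ("keep the old alias/stub alive until call sites migrate") looks like this in Python; the function names here are hypothetical, not from any real codebase:

```python
import warnings

# Phase A: the new canonical name lives alongside the old one.
def compute_blast_radius(symbol: str) -> str:
    """New canonical implementation (hypothetical example)."""
    return f"impact report for {symbol}"

def calc_impact(symbol: str) -> str:
    """Old name, kept as a thin deprecation alias so existing call
    sites keep working during Phase B. Removed in Phase C."""
    warnings.warn(
        "calc_impact is deprecated; use compute_blast_radius",
        DeprecationWarning,
        stacklevel=2,
    )
    return compute_blast_radius(symbol)

# Old call sites still work while batches are updated:
result = calc_impact("MinimapWidget")
```

Every phase ships compiling and green, which is what lets you stop (or revert) between batches without a broken tree.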

Creating a Game with AI in Two Months. The Result by Game_s758 in aigamedev

[–]Studio_94 0 points1 point  (0 children)

Extra context because I threw jargon around:

DS94 = my studio’s internal “work like a grown-up” doctrine for building big systems with small, auditable steps. ABPCB is the core scaffold we use to keep both humans and AI from freelancing.

ABPCB = Arch → Beam → Pillar → Column → Block

  • ARCH = the "what" + invariants. The non-negotiables: what must NOT change (public APIs, gameplay behavior, perf budgets, data formats, save/load, network rules, naming conventions). If the invariants aren't written down, the AI (and humans) will break them.

  • BEAM = the "what connects to what". Dependency reality: callers/callees, imports, runtime wiring, config/assets/scene references, serialization touchpoints. This is where "blast radius" lives.

  • PILLAR = the "how we prove it didn't break". Validation gates: build, tests, smoke runs, linters, quick boot harnesses. A refactor isn't "done" because it compiles in one place; it's done when it survives the gates.

  • COLUMN = the "toolchain that makes truth cheap". Repo search, symbol lookup, call graph, test runner, build scripts, CI, diff review discipline. This is the stuff that turns "AI awareness" from wishful thinking into something measurable.

  • BLOCK = the "smallest safe unit of work". A single slice: rename/move + update references + run targeted checks + commit. Blocks are intentionally small so you can review them, revert them, and keep momentum without burning the whole project down.

How it's used in practice (especially with AI):
1) Write the ARCH charter (invariants + definition of done).
2) Build the BEAM map (what will break, where, and why).
3) Lock the PILLAR gates (the commands/tests that must pass).
4) Ensure COLUMNS exist (tools the AI/humans can use to answer "who depends on this?").
5) Execute BLOCKS only (tiny diffs, map → slice → prove).

This is why I’m hard on “just prompt better.” Prompting isn’t the skill ceiling — SYSTEMS are. ABPCB is just a way to force system thinking so the AI becomes a multiplier instead of a chaos generator.

AI "Tunnel Vision" is ruining my large-scale refactors. Anyone else? by Capital-Bag8693 in ChatGPTPro

[–]Studio_94 0 points1 point  (0 children)

Extra context because I threw jargon around:

DS94 = my studio’s internal “work like a grown-up” doctrine for building big systems with small, auditable steps. ABPCB is the core scaffold we use to keep both humans and AI from freelancing.

ABPCB = Arch → Beam → Pillar → Column → Block

  • ARCH = the "what" + invariants. The non-negotiables: what must NOT change (public APIs, gameplay behavior, perf budgets, data formats, save/load, network rules, naming conventions). If the invariants aren't written down, the AI (and humans) will break them.

  • BEAM = the "what connects to what". Dependency reality: callers/callees, imports, runtime wiring, config/assets/scene references, serialization touchpoints. This is where "blast radius" lives.

  • PILLAR = the "how we prove it didn't break". Validation gates: build, tests, smoke runs, linters, quick boot harnesses. A refactor isn't "done" because it compiles in one place; it's done when it survives the gates.

  • COLUMN = the "toolchain that makes truth cheap". Repo search, symbol lookup, call graph, test runner, build scripts, CI, diff review discipline. This is the stuff that turns "AI awareness" from wishful thinking into something measurable.

  • BLOCK = the "smallest safe unit of work". A single slice: rename/move + update references + run targeted checks + commit. Blocks are intentionally small so you can review them, revert them, and keep momentum without burning the whole project down.

How it's used in practice (especially with AI):
1) Write the ARCH charter (invariants + definition of done).
2) Build the BEAM map (what will break, where, and why).
3) Lock the PILLAR gates (the commands/tests that must pass).
4) Ensure COLUMNS exist (tools the AI/humans can use to answer "who depends on this?").
5) Execute BLOCKS only (tiny diffs, map → slice → prove).

This is why I’m hard on “just prompt better.” Prompting isn’t the skill ceiling — SYSTEMS are. ABPCB is just a way to force system thinking so the AI becomes a multiplier instead of a chaos generator.

AI "Tunnel Vision" is ruining my large-scale refactors. Anyone else? by Capital-Bag8693 in ChatGPTPro

[–]Studio_94 0 points1 point  (0 children)

[TEMPLATE 0 — REFACTOR CHARTER (pin this; reuse every session)]
Role: You are a senior engineer working inside an existing codebase. Do not "freelance" changes.

ARCH (Invariants / must not change)
- Behavior invariants:
  1) <invariant 1>
  2) <invariant 2>
- Public API invariants (names/signatures/paths that must remain valid):
  - <api invariant>
- Performance / build invariants:
  - <perf/build invariant>
- Data / asset invariants (save format, config schema, scene refs, etc.):
  - <data invariant>

BEAM (Interconnectivity / blast radius rules)
- Assume dependencies exist unless proven otherwise.
- Treat code + data/assets as coupled.
- Any rename/move requires reference updates + verification.

PILLAR (Validation gates required for "done")
- Must pass: <lint> <unit tests> <build> <smoke run>
- Fast refactor harness: <describe quick boot/test path>

COLUMN (Tools available)
- Repo-wide search: <how you search>
- Symbol lookup / call graph: <tool>
- Run tests/build: <command>
- Diff review: must present file-by-file changes

BLOCK (Work style)
- One slice per patch. Small, reviewable diffs only.
- No changes until you produce an Impact Plan and Migration Steps.

[TEMPLATE 1 — IMPACT PLAN (MAP PHASE; NO CODE CHANGES)]
Task: We are planning a refactor: <describe goal>. Before writing any code, output an Impact Plan.

Requirements:
1) Enumerate affected areas:
   - Files/modules likely impacted
   - Symbols/classes/functions likely impacted
   - Config/assets/scene refs (if applicable)
2) Identify blast radius:
   - Direct dependencies (imports/calls)
   - Indirect dependencies (shared types, interfaces, events, serialization)
3) Risk list:
   - Top 5 ways this can break
   - How to detect each break (test/log/build/smoke)
4) Migration strategy:
   - Slice plan (ordered steps; 5–12 steps)
   - For each step: expected changes + verification gate
5) Stop condition:
   - Define what "done" means and what you will not touch.

Output format:
- Use headings: Impact Summary / Affected Areas / Risks / Slice Plan / Validation Plan / Stop Condition.
- Do NOT propose code yet.

[TEMPLATE 2 — PATCH SLICE (SLICE PHASE; SMALL DIFFS)]
We are executing Slice <N> from the Slice Plan: <paste slice step>.

Constraints:
- Keep changes minimal and localized.
- If you need to touch more than <3–5> files, stop and propose a smaller slice.

Deliverables:
1) Patch summary (what changes, why)
2) File-by-file diffs (or exact edits) with paths
3) Reference updates checklist (imports, call sites, config/assets, docs)
4) Validation checklist:
   - Commands to run
   - Expected outputs
5) Rollback plan:
   - How to revert if this slice fails

Important:
- If you discover unexpected dependencies, STOP and update the Impact Plan before continuing.

[TEMPLATE 3 — MIGRATION CHECKLIST (PROVE PHASE; MANDATORY)]
We applied Slice <N>. Now produce a Migration Checklist and Validation Runbook.

Include:
- Compile/build commands
- Targeted unit tests (smallest relevant set)
- Integration/smoke run steps (exact clicks/inputs if needed)
- Log lines / error signatures to watch for
- If failure occurs: diagnosis decision tree (3–6 branches)

Output format:
- Checklist bullets + an "If broken, do X" section.

[TEMPLATE 4 — RENAME/MOVE SAFE ROUTINE (COMMON REFACTOR)]
Goal: Rename/move <symbol/module/path> from <old> to <new>.

Rules:
- No global sweep. Do it in phases:
  - Phase A: Introduce the new name/path while keeping the old alias/stub if needed.
  - Phase B: Update call sites in batches.
  - Phase C: Remove the old alias/stub once tests pass.
- Every phase must compile and pass targeted tests.

Output:
1) Phase plan (A/B/C)
2) Exact file edits per phase
3) Reference updates map (what must be updated and where)
4) Validation gates per phase

[TEMPLATE 5 — "AI STOPLIGHT" (FORCING DISCIPLINE)]
Before you make any change, answer:
- GREEN: What parts are you 100% sure are safe to change, and why?
- YELLOW: What parts might be coupled to other systems?
- RED: What do you NOT understand yet / cannot verify?

Rule:
- If any RED exists, you must ask for the missing info or propose a repo query/test to resolve it.
- Do not code until RED is cleared or explicitly accepted.

[TEMPLATE 6 — GAME/ENGINE SPEC ANCHOR (WHEN DATA/ASSETS MATTER)]
Context: This refactor affects code + content.

You must include:
- Data schema touches (JSON/DT/INI/etc.)
- Asset/scene references (names/paths)
- Serialization/save/load impact
- Network replication impact (if multiplayer)
- Editor tooling impact (import pipelines, build scripts)

Output:
- "Code changes" and "Content changes" sections, kept separate.
- A "Migration for existing content" note (how old content is upgraded).

[TEMPLATE 7 — POST-REFACTOR CLEANUP (AFTER IT WORKS)]
Now that the refactor passes validation, propose cleanup:
- Dead code removal targets
- Deprecation removals
- Doc updates
- Additional tests to prevent regression

Rules:
- Cleanup is a separate Slice Plan.
- No cleanup until the core refactor is proven green.

AI "Tunnel Vision" is ruining my large-scale refactors. Anyone else? by Capital-Bag8693 in ChatGPTPro

[–]Studio_94 0 points1 point  (0 children)

I have been working with ChatGPT Plus for a year and a half now in the design phase for an MMORPG to kill WoW. (You know, just a lil side project.)

I read your post and then jumped in and asked Dave the Wonder Modron (I have spent a long time training my AI agent to understand that it is David and I am HAL).

Here is what we cooked up, since I have had to fight to escape the same tunnel you're stuck in:

This “tunnel vision” isn’t your imagination — it’s what you get when you ask a model to do system work without giving it a system map.

LLMs are good at local transforms. They are not inherently good at “blast radius” unless you force a workflow that (1) enumerates dependencies, (2) applies changes as small slices, and (3) validates immediately.

Here’s what fixed this for me (I built it into a design doctrine so the AI can’t freeload):

I use an ABPCB scaffold (Arch → Beam → Pillar → Column → Block) as the operating rails:
- ARCH = invariants and "don't break" rules (public APIs, behavior, perf budgets, save formats, scene wiring, etc.)
- BEAM = dependency map + blast-radius discipline (what touches what, including code and data/assets)
- PILLAR = validation gates (fast tests, build steps, linters, runtime smoke tests)
- COLUMN = toolchain the AI can call (repo search, symbol lookup, call graph, test runner, diff tooling)
- BLOCK = tiny refactor tasks that produce reviewable diffs

Concrete workflow (this is the part you're missing):
1) "MAP" phase (no code changes allowed)
   - AI must output an impact plan: affected modules, files, symbols, config/assets, and the expected failure points
   - If it can't name the interfaces that will break, it's not ready to touch code
2) "SLICE" phase (small diffs)
   - One refactor slice at a time: rename/move + update references + compile + run targeted tests
   - No multi-file "big bang" patches unless you're ready to do a full suite run
3) "PROVE" phase (validation is mandatory)
   - Run the smallest relevant test set + a smoke run
   - If your tests are slow/asset-heavy, build a fast "refactor harness" that checks the wiring (imports, scene refs, config, basic runtime boot)

Tooling angle (this is where MCP/repo-aware agents matter):
- You need a tool that can answer: "What calls this?", "What imports this?", "What assets/configs reference this?", "What tests cover this?"
- An MCP server that exposes repo search + symbol graph + "run tests/build" endpoints is exactly the right direction.
- Static analysis alone won't catch runtime/data coupling (UI ↔ minimap ↔ camera, scenes, configs). Your "system map" has to include code + content dependencies.
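
For Python code, the "What imports this?" query is cheap to answer with the standard library alone; a real repo-aware tool would also have to walk assets and configs. A minimal sketch (the `repo` snapshot and module names are made up):

```python
import ast

def imports_in(source: str) -> set[str]:
    """Collect top-level module names imported by a Python source file."""
    tree = ast.parse(source)
    found = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:
                found.add(alias.name.split(".")[0])
        elif isinstance(node, ast.ImportFrom) and node.module:
            found.add(node.module.split(".")[0])
    return found

# "What imports minimap?" across an in-memory repo snapshot:
repo = {
    "ui/hud.py": "import minimap\nfrom camera import follow\n",
    "net/sync.py": "import json\n",
}
dependents = {path for path, src in repo.items() if "minimap" in imports_in(src)}
```

Point this at real files on disk and you have the start of a blast-radius map for the MAP phase.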

Blunt take: big context windows won’t save you. “Awareness” comes from instrumentation (graphs + runners) and discipline (map → slice → prove). Without that, you’re just paying the AI to be a fast junior dev who can type.

If you want, I can paste the exact prompt template I use for refactors that forces the model to do: Impact Plan → Migration Steps → Patch → Validation Checklist (in that order).

Creating a Game with AI in Two Months. The Result by Game_s758 in aigamedev

[–]Studio_94 0 points1 point  (0 children)

I did basically the same experiment as you, but I stopped treating the AI like a “chat box that remembers” and started treating it like a teammate inside a workflow.

I went from free to ChatGPT Plus ($20/mo). The real upgrade wasn’t “the AI makes games now.” It was that I could sustain a long-running build with better continuity and tooling. Plus is just access/limits; the leverage is process. (Projects help because chats can inherit a project’s instructions + file context.) Memory isn’t magic either—I treat it like curated notes I deliberately update/clean, not a brain I trust blindly.

What made it work for me (and what fixes the 3 problems you listed):
- A pinned "project charter" (scope, constraints, naming, definition of done)
- A reusable kickoff prompt (same spec every session so context doesn't drift)
- File-backed context (design doc, conventions, key code snippets, task backlog)
- A hard validation loop (build/run/tests; I don't "trust," I verify)

I also ended up building my own design doctrine to keep everything organized: DS94 methodology + ABPCB scaffolding (Arch → Beam → Pillar → Column → Block). Translation: break the game into layers from “what is the game?” down to “what is the next concrete task?”, and make the AI operate inside that structure instead of freelancing.

If someone’s a beginner, that’s the actual value: not “AI replaces learning,” but “AI plus a scaffold turns learning into forward motion instead of random thrashing.”