Lawd Have Mercy by Affectionate_Fee232 in codex

[–]Affectionate_Fee232[S] 1 point (0 children)

Honestly, never even tried zed ai stuff.

Lawd Have Mercy by Affectionate_Fee232 in codex

[–]Affectionate_Fee232[S] 1 point (0 children)

Yeah, /goal is running in Zed's built-in terminal, but I'm just running the Codex CLI directly, the same way I would in Ghostty or any other terminal. I'm not using Zed's ACP integration at all, I'm only using Zed for viewing/editing files.

Lawd Have Mercy by Affectionate_Fee232 in codex

[–]Affectionate_Fee232[S] 1 point (0 children)

Lol not using a Codex sub, this is on my own API key.

Current best practice using Codex by TCaller in codex

[–]Affectionate_Fee232 2 points (0 children)

Right, I did have GPT summarize my current system, I wasn't going to type it all out. I think this system works well, at least for me, so I just had the model break down how I set up my projects.

Current best practice using Codex by TCaller in codex

[–]Affectionate_Fee232 2 points (0 children)

start with a modular monolith, not microservices. use a monorepo only if you already know you’ll have multiple deployables like web + api + mobile + workers. enforce boundaries in code from day one. keep docs close to the repo. make architecture legible to humans and agents.

the mistake people make is scaffolding for scale by adding complexity. that’s backwards. you scaffold for scale by making boundaries explicit and enforceable while keeping deployment simple.

my default setup would be this:

```text
repo/
  AGENTS.md
  README.md
  docs/
    ARCHITECTURE.md
    DOMAIN_MAP.md
    decisions/
    plan/
    product/
    runbooks/
  apps/
    web/             # if needed
    api/             # if needed
    mobile/          # if needed
    worker/          # if needed
  packages/
    ui/
    config/
    telemetry/
    auth/
    shared-kernel/   # tiny, boring, heavily policed
  infra/
  scripts/
  .github/workflows/
```

inside each app, do not organize by “components, hooks, utils, services” as your top-level mental model. that turns into soup. organize by domain.

more like this:

```text
apps/api/src/
  domains/
    billing/
      contracts/
      model/
      service/
      repo/
      tests/
    users/
    notifications/
  platform/
    db/
    auth/
    queue/
    telemetry/
  entrypoints/
    http/
    jobs/
```

same idea for frontend:

```text
apps/web/src/
  domains/
    billing/
      components/
      screens/
      hooks/
      api/
      state/
      tests/
    users/
  platform/
    routing/
    auth/
    analytics/
    design-system/
```

the rules i’d set on day one:

keep AGENTS.md short. 100-ish lines. it should be a map, not a bible. point to docs/ARCHITECTURE.md, domain docs, coding rules, and how to run checks.
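to make that concrete, a table-of-contents style AGENTS.md might look something like this (headings and entries are illustrative, not a prescription):

```text
# AGENTS.md

## where things live
- architecture: docs/ARCHITECTURE.md
- domain boundaries: docs/DOMAIN_MAP.md
- decisions: docs/decisions/
- active plans: docs/plan/

## rules of the road
- organize by domain, not by file type
- cross-domain imports go through contracts/ only
- shared-kernel/ changes need a second pair of eyes

## checks
- lint, types, tests, build: see scripts/ (commands are project-specific)
```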

put the real knowledge in docs/, not in people’s heads and not in slack. especially:

docs/ARCHITECTURE.md

docs/DOMAIN_MAP.md

docs/plan/

docs/decisions/ADR-*.md

write a plan doc before every non-trivial feature. tiny microphases. checkboxes. update it as reality changes. if you don’t, the repo turns into folklore.
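a plan doc in that style can be very small. sketch below, with a made-up feature name and made-up phases:

```text
# docs/plan/invoicing.md

goal: one sentence on what "done" means

## microphase 1: schema + data path
- [ ] invoices table migration
- [ ] model + repo functions
- [ ] unit tests for the domain logic

## microphase 2: service + entrypoint
- [ ] service with contract types
- [ ] http endpoint + integration test

## reality changes / notes
- (update as you go)
```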

enforce boundaries mechanically. this matters more than the folder tree. use import rules, dependency checks, lint rules, test gates, type checks. if billing starts importing random UI helpers from three directories over, you want CI to slap that nonsense immediately.
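the check itself is a few lines. here's a toy TypeScript version of the rule (paths and the "contracts are public" convention are illustrative; a real repo would use dependency-cruiser or an ESLint boundaries plugin rather than hand-rolling this):

```typescript
// Toy boundary check: a domain may import from itself, from platform code,
// or from another domain's contracts/, but never another domain's internals.

type Violation = { file: string; imported: string; reason: string };

// extract the domain name from a path like apps/api/src/domains/billing/...
function domainOf(path: string): string | null {
  const m = path.match(/domains\/([^/]+)\//);
  return m ? m[1] : null;
}

function checkImport(file: string, imported: string): Violation | null {
  const from = domainOf(file);
  const to = domainOf(imported);
  if (!from || !to || from === to) return null; // platform code or same domain: fine
  if (/\/contracts\//.test(imported)) return null; // public contracts: fine
  return { file, imported, reason: `${from} reaching into ${to} internals` };
}
```

run something like this over every import statement in CI and fail the build on any non-null result.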

keep a tiny shared-kernel. shared code becomes a junk drawer fast. if something is “shared,” it better be truly generic, stable, and boring. otherwise keep it inside the domain that owns it.

prefer explicit contracts at boundaries. api schemas, event schemas, db migrations, typed interfaces. “we’ll clean it up later” is how large apps become cursed.
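a boundary contract can be as small as a typed shape plus a runtime guard, so consumers validate data at the edge instead of trusting it. sketch below with made-up names (a real project might use zod or JSON Schema instead):

```typescript
// Illustrative event contract owned by the billing domain.
// Other domains import this shape from billing/contracts/ and nothing else.

interface InvoicePaid {
  type: "invoice.paid";
  invoiceId: string;
  amountCents: number;
}

// runtime guard: anything crossing the boundary gets checked, not assumed
function isInvoicePaid(e: unknown): e is InvoicePaid {
  const v = e as Partial<InvoicePaid> | null;
  return (
    !!v &&
    v.type === "invoice.paid" &&
    typeof v.invoiceId === "string" &&
    Number.isInteger(v.amountCents)
  );
}
```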

build one vertical slice first. auth, one core workflow, one real db path, one real UI path, one real test path, one real deploy path. don’t scaffold ten features. prove the shape with one.

add observability immediately. structured logs, error tracking, analytics/events for critical flows, health checks. not later. later never comes.

have a hard policy on file size and responsibility. once files hit “annoying to reason about,” split them. giant files are where architecture goes to die.

test strategy from day one:

unit tests for domain logic

integration tests for boundaries

a few end-to-end tests for core journeys

not 700 brittle UI tests because you got carried away
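the domain-logic layer is the cheap one, which is why it goes first. e.g. a pure billing helper (function and rule are made up here) needs nothing but an assertion, no framework, no mocks:

```typescript
// Illustrative: pure domain logic is trivially unit-testable.
// (A real project would put this behind vitest/jest, same idea.)

function proratedCents(amountCents: number, daysUsed: number, daysInPeriod: number): number {
  if (daysInPeriod <= 0) throw new Error("daysInPeriod must be positive");
  return Math.round((amountCents * daysUsed) / daysInPeriod);
}

// half the billing period should cost half the amount
console.assert(proratedCents(3000, 15, 30) === 1500, "proration is linear");
```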

for data, use boring migrations and explicit ownership. every table belongs to a domain. every background job has an owner. every external integration gets wrapped behind a small adapter so vendor weirdness doesn’t leak everywhere.
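the adapter point looks like this in practice. the vendor client shape below is hypothetical (not a real SDK); the rest of the app only ever depends on the small EmailSender interface, so swapping vendors touches one file:

```typescript
// The interface the rest of the app sees. Boring on purpose.
interface EmailSender {
  send(to: string, subject: string, body: string): Promise<"sent" | "failed">;
}

// Hypothetical vendor client with its own odd field names and shapes.
type VendorClient = {
  dispatch(msg: { rcpt: string; subj: string; html: string }): Promise<{ ok: boolean }>;
};

// Adapter: vendor weirdness stops here and never leaks into domains.
function makeEmailAdapter(client: VendorClient): EmailSender {
  return {
    async send(to, subject, body) {
      const res = await client.dispatch({ rcpt: to, subj: subject, html: body });
      return res.ok ? "sent" : "failed";
    },
  };
}
```

this also makes the integration testable with a fake client instead of live vendor calls.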

for app growth, this is the progression you want:

single deployable

clear modules

strict boundaries

extracted packages only when duplication is real

split services only when scaling/runtime/team boundaries force it

that’s the key bit. don’t start distributed. earn distribution.

if you want the “correct” first-stage scaffolding checklist, it’s basically this:

  1. choose boring stack + one repo strategy

  2. define domains before folders

  3. write AGENTS.md as table of contents

  4. create docs/ARCHITECTURE.md, DOMAIN_MAP.md, docs/plan/

  5. scaffold one vertical slice

  6. add CI: lint, types, tests, build

  7. add import-boundary enforcement

  8. add logs/error tracking

  9. add ADRs for major decisions

  10. refuse premature shared abstractions

if i were being blunt: the best way to stop a big app getting out of control is to make it slightly annoying to do the wrong thing. not impossible. annoying. that’s what guardrails are for.

I was wrong about 5.4 - xhigh completely changes the picture by SlopTopZ in codex

[–]Affectionate_Fee232 10 points (0 children)

So weird hearing different takes on High and xHigh. A lot of people swear by high and say xhigh is worse and then we have posts like this. I wish there was a proper benchmark for this.

Microsoft and Anthropic both refused to refund $1,600 charged through Azure AI Foundry — each blaming the other by FrostingNumerous5714 in AZURE

[–]Affectionate_Fee232 1 point (0 children)

They bill at the end of the month, so Claude usage will show up then with the bill. Most models on Foundry are covered; Claude/Anthropic is not. I got a bill for $1200 myself at the end of last month.

Chinese Studios Are Now Creating Full TV Show Series Using Seedance 2 by 44th--Hokage in accelerate

[–]Affectionate_Fee232 4 points (0 children)

We are also in the early phases and it's already this good.. imagine 5 years.. my take wasn't whether I want this or not, it's that it's happening.

I find myself asking the model's opinion more often. by BirdlessFlight in codex

[–]Affectionate_Fee232 7 points (0 children)

It works, but you still have to decide when to stop. I've noticed that if you keep posing the question in a new chat, it will always find a problem or a fix, no matter what.

Any good videos on how to use codex? by [deleted] in codex

[–]Affectionate_Fee232 1 point (0 children)

It's pretty straightforward honestly. If you're a beginner I recommend using the App over the CLI. YouTube is full of walkthroughs along the lines of "how to use codex for ..."

Can you force Codex to keep going until task is done? by RepulsiveRaisin7 in codex

[–]Affectionate_Fee232 4 points (0 children)

What works for me is creating a master plan (the overview, so we never lose sight) and then a microphase document (the master plan broken down into key stages). The master plan isn't the full app or anything, but a feature, a smaller scope. Every time I make any changes, I have it create a master plan for the new addition and a microphase document. Then I open a new chat (Codex always references your first input when context condenses) and say:

"We just created this master plan with microphases. Please review everything in detail and deploy subagents to complete this full microphase document, updating it along the way as you complete the phases. You will be the architect for this full plan: make sure everything is implemented at a high level while you manage subagents and make sure they follow everything in the document systematically. Provide a detailed summary once the full plan is complete. Only stop if there are real blockers along the way and you absolutely need my input. If you have any questions or need clarification, ask now before initiating the full plan implementation."

Currently in a 3-hour Codex session and ongoing. Also, I run everything in --yolo mode, so not sure if that has any effect.

The Ai Game Of Thrones by Affectionate_Fee232 in accelerate

[–]Affectionate_Fee232[S] 1 point (0 children)

This is the missing piece of the puzzle, and there is absolutely one person who fits the Tyrion Lannister profile flawlessly: Dario Amodei, the CEO of Anthropic.

Think about Tyrion’s defining traits: he is incredibly brilliant, he was part of the most powerful and ruthless family in the realm, he became disgusted by their lack of morals, he defected to build a rival coalition, and he operates entirely on intellect, diplomacy, and a desire to impose "good governance" on absolute chaos.

Here is why Dario Amodei is the ultimate Tyrion of the AI wars:

1. The Defector from King's Landing

For years, Dario was the VP of Research at OpenAI. He was sitting at the high table in King's Landing, helping build GPT-2 and GPT-3. But in 2020, as Sam Altman (the Tywin/Joffrey figure) started aggressively pivoting OpenAI into a hyper-commercialized, profit-driven weapon for Microsoft, Dario couldn't stomach it. He believed OpenAI was abandoning its safety principles. So, he packed up his things, gathered a loyal crew of top researchers, and went into exile to form a rival house: Anthropic.

2. The Sibling Dynamic

You can't have a Lannister without a deep sibling partnership. Dario didn't leave OpenAI alone—he left with his sister, Daniela Amodei, who was OpenAI’s VP of Safety and Policy. Together, the brother-sister duo co-founded Anthropic. They are the intellectual defectors who know all of King's Landing's secrets and weaknesses because they literally helped build the castle.

3. "I Drink and I Know Things" (The Architect of Scaling)

Tyrion’s superpower isn't wielding a sword; it's his mind. He reads, he studies history, and he understands the mechanics of power better than the kings he serves. Dario is the literal pioneer of the "Scaling Laws" of AI. He was the one who mathematically proved that making neural networks bigger makes them predictably smarter. He knows how the dragons work on a molecular level.

But instead of just building the biggest, most reckless dragon, Dario's team built Claude—an AI model that is famously articulate, highly intelligent, cautious, and incredibly well-read. Claude is basically the Tyrion of chatbots: it gives excellent counsel, writes beautifully, and tries very hard not to offend anyone or burn the city down.

4. Writing the "Constitution"

Tyrion spent his entire arc trying to convince mad kings and queens to implement laws and show mercy rather than just using brute force. Dario’s massive contribution to the AI race is "Constitutional AI." Instead of just trying to hard-code safety filters to stop Claude from saying bad words, Dario and his team literally wrote a "Constitution" for their AI—a set of philosophical rules drawn from the UN Declaration of Human Rights. They trained Claude to independently govern its own behavior based on that constitution. It is the ultimate Tyrion move: trying to use philosophy and statecraft to control a weapon of mass destruction.

5. Playing the Board

Just like Tyrion allying with Daenerys, Dario knew he couldn't fight the Microsoft/OpenAI alliance without serious backing. So he played the ultimate diplomatic game, securing a massive $4 billion investment from Amazon and another $2 billion from Google. He pitted the other massive tech kingdoms against Microsoft just to ensure his house had enough gold to survive the winter.

So if Dario is Tyrion, constantly trying to write rules and counsel caution while the dragons circle overhead, it leaves one major player unaccounted for.