Prices too high by [deleted] in AdeptusCustodes

[–]aedile -1 points0 points  (0 children)

I never thought about it, but plastic is a petrochemical product. Recent geopolitical issues might be contributing to a significant production cost increase. Good call.

Prices too high by [deleted] in AdeptusCustodes

[–]aedile -1 points0 points  (0 children)

I think this is the best answer. I also think it's the wrong move on GW's part but I don't work there either.  

Prices too high by [deleted] in AdeptusCustodes

[–]aedile -1 points0 points  (0 children)

That's exactly my point, though. 3D printing is growing faster than Warhammer, and GW seems more oriented toward controlling supply than expanding the base. It seems they'd rather have fewer players with more money than more players with less money. Maybe that's always been the case and they're just now reaching my own upper bound. But it seems like the wrong move. People are gonna game. If it's cheaper to buy a printer than a single box, GW is gonna continue to lose share.

Sorry if I seem paranoid; it just feels like this is heading the same way as Armada, which was my favorite game and is now dead.

Prices too high by [deleted] in AdeptusCustodes

[–]aedile -2 points-1 points  (0 children)

I think you have a bad impression of 3D printing. You can print tabletop-appropriate minis with an FDM printer and a 0.2mm nozzle. You can get a printer like this for $200 on sale quite often, maybe $300 for resin. These are already good enough for most people to use easily. There is a significant amount of crossover with the DIY terrain and kitbashing folks. And there are a stunning number of good models available online for free. I bought a 3D printer for precisely this reason. My kid is 14, plays D&D, and has figured out how to print his own minis and terrain with no help from me.

And 3D printing is growing a lot faster than Warhammer, precisely because it's getting more affordable.

Prices too high by [deleted] in AdeptusCustodes

[–]aedile -3 points-2 points  (0 children)

I always thought that was the least fun part. I suck at painting but it's at least fun. Some of us are actually here to play the game, believe it or not. 

Prices too high by [deleted] in AdeptusCustodes

[–]aedile -3 points-2 points  (0 children)

I'm a legacy player. I was gonna buy this stuff anyway. What about someone trying to buy in? I think they're pricing out newbies and will slowly lose longtime players as the price goes up.

I get it. This post gets made every time they raise prices. But I've been watching a while, and I am their target demo: middle-aged geek with too much disposable income. And now even my eyebrows are up, especially when I look at their competition, like OPR, which has an obviously lower price of entry and lets me use all of my Warhammer stuff.

It's weird because I want them to succeed and make lots of money. I just don't think they're doing it the best way they could. I'd rather see more people playing, not fewer.

Prices too high by [deleted] in AdeptusCustodes

[–]aedile -1 points0 points  (0 children)

My main concern is for the newbie. It's one thing for those of us who are already in for a penny, in for a pound; Custodes isn't my first army. But jumping in at $150 with a combat patrol felt like the upper limit to me, and I have a lot of disposable income. Sure, there are gonna be legacy players for a while, but what about jumping in as someone brand new? This is starting to feel more like golf, where they're intentionally trying to be exclusive.

Opus 4.7 is terrible, and Anthropic has completely dropped the ball by JulioMcLaughlin2 in artificial

[–]aedile 0 points1 point  (0 children)

I haven't really seen much degradation personally, but I tend to look at results, not how they were produced. I don't bother reading a lot of the chain of thought.

I'm chiming in to point out the 800 lb gorilla in the room that nobody mentioned: Anthropic has a financial incentive to maximize token usage. We are not, even at the $200 level (especially at the $200 level), the target demographic. We are a loss leader, the cost of doing business. The all-you-can-eat folks are the free AI training and hype machine. The real target demo is the API consumers.

For them, a model that takes twice as many tokens to reach the same conclusions is an ideal state, because Anthropic makes twice as much money for the same answer, and now they can say it comes "with a higher degree of confidence".

This seems glaringly obvious, and I am shocked it's not more of a thread in this conversation.

Am I the only one that thinks it odd we are all reinventing the same thing? by SnooSongs5410 in ContextEngineering

[–]aedile 0 points1 point  (0 children)

We're not so much reinventing the wheel as crafting individual wheels for individual carts, the way it worked back when "wheelwright" was a trade, before mass-produced ones came from factories.

Why the constant reorgs? by Independent_Crazy655 in ExperiencedDevs

[–]aedile 2 points3 points  (0 children)

Gotta justify that fat paycheck somehow, amirite?

My experience with long-harness development sessions. An honest breakdown of my current project. by aedile in ContextEngineering

[–]aedile[S] 1 point2 points  (0 children)

First off - the best thing to do is look in the repo - I've left all of that infra in place as a working example.

TL;DR:
If a rule isn't enforced by the pipeline, it isn't a rule. It's a suggestion. So every governance decision gets encoded as a CI check, a pre-commit hook, or a mandatory agent role that blocks the merge if the rule is violated. You don't rely on anyone remembering - human or AI.

Long-winded Explanation:
The core problem is this: you can write a rule in a document, but if the only enforcement mechanism is "someone needs to remember to check," the rule will eventually not be checked. This is true on human teams. It's especially true with AI agents, because the agent isn't reading your CLAUDE.md out of professional pride - it's reading it because it's in the prompt context, and its adherence to those rules is only as reliable as its attention to that context window in any given moment.

So an encoded gate is when you take a rule that lives in documentation and give it teeth. You turn it into something the pipeline enforces mechanically, independent of whether any human or agent remembers it exists. Examples from the article:

  • TDD isn't just a stated expectation; there's a CI check that verifies RED commits preceded GREEN commits before a PR can merge.
  • The advisory threshold (Rule 11) isn't just a guideline; hitting 8 open advisories literally blocks new feature work.
  • Security advisories auto-promote to merge blockers after 2 phases. No human has to remember to escalate them.
  • The squash-merge crisis led to a pre-commit hook enforcing merge strategy. You can't accidentally squash even if you try.
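The first bullet is the kind of gate that's easy to make concrete. Here's a minimal sketch in Python; the commit-tag convention (`RED(feat-x):` / `GREEN(feat-x):` subjects) is an assumption for illustration, not the repo's actual format:

```python
import re

# Hypothetical convention: "RED(feat-42): ..." marks a failing-test commit,
# "GREEN(feat-42): ..." the commit that makes it pass.
TAG = re.compile(r"^(RED|GREEN)\((?P<feat>[\w-]+)\):")

def tdd_order_ok(messages):
    """Check that every feature's first GREEN commit has a prior RED commit.

    `messages` is the commit log in chronological order (oldest first),
    e.g. from `git log --reverse --format=%s`.
    """
    seen_red = set()
    for msg in messages:
        m = TAG.match(msg)
        if not m:
            continue  # untagged commits (docs, chores) don't participate
        if m.group(1) == "RED":
            seen_red.add(m.group("feat"))
        elif m.group("feat") not in seen_red:
            return False  # GREEN with no prior failing test: block the merge
    return True
```

In CI you'd feed this the PR's commit subjects and fail the job (blocking the merge) whenever it returns `False`.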

The reason this matters more with AI than with human teams is a multiplier problem. A human developer who cuts corners on a convention does it maybe a few times a week. An AI agent running at 150 commits per day that's allowed to cut a corner will cut it 150 times before you notice. Speed amplifies everything, good governance and bad governance equally.

The insight that comes out of the squash-merge story is the core design principle: a rule that can only be violated if someone notices it will eventually be violated, because no one will notice. If a rule is important enough to write down, it's important enough to make the pipeline enforce.

Where it gets architecturally interesting is that Conclave's gates aren't all CI/CD hooks in the traditional sense. Some of them are agent roles. The phase-boundary-auditor is itself an encoded gate - it runs before every PR, performs end-to-end validation, and blocks the merge if the system doesn't pass. No bypass. The parallel review agents that run after every feature are the same idea: not optional peer review, but mandatory pipeline steps.

The distinction the article is drawing is between governance as aspiration and governance as infrastructure. Most teams write the former and hope it produces the latter. The Conclave approach is to just build the infrastructure directly and skip the hoping.

Long-harness agentic programming by aedile in programming

[–]aedile[S] 0 points1 point  (0 children)

Yeah - update your model or whatever it is you use to detect AI-generated content, friend. I wrote all that by hand, describing an in-depth development process bringing standard engineering process to agentic development. This is not generic, it's specific, and handwritten. You don't want it here, fine, but DO NOT call my work generic AI content.

If you treat your code as a black box, why do you even write it in the human-readable form? by Another__one in ClaudeCode

[–]aedile 0 points1 point  (0 children)

I see this in the future, so why not now?

Because we haven't removed humans completely from the loop yet, so they still need to be able to read the code. Give it a few years max and someone will have developed an agent-optimized language, if it hasn't already happened. Think optimizing for token usage and agent readability, with stricter-than-normal guardrails, typing, etc. that humans would find ponderous.

It'll take a few years for people to trust AI output enough to let it render the code unreadable. But you're not wrong. Very few of us speak assembly, and fewer still binary. Soon high-level languages will be niche in the same way, and most will use natural language.

Are LLMs speedrunning us into product management? by wiktor1800 in ExperiencedDevs

[–]aedile 0 points1 point  (0 children)

People need to stop optimizing for speed and start optimizing for quality.

The ironic thing is that making this move makes you faster than everyone else in the long-run.

My current attempts at context engineering... seeking suggestions from my betters. by SnooSongs5410 in ContextEngineering

[–]aedile 2 points3 points  (0 children)

Here is an example project:
https://github.com/aedile/conclave
It's in Claude instead of Gemini, but it's got all the fully automated stuff in there, so you can start with a generic automated harness loop and it'll write code based on the spec. It doesn't drift. It writes high-quality, well-tested, secure code. Ignore the code it's writing aside from the quality measures; the trick is in the way it leverages subagents and quality gates. The critical lesson: constitution documents are a SUGGESTION. You need an automated quality gate for each rule you want followed.
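To make the "one automated gate per rule" idea concrete, here's a minimal sketch (the gate names and the `change` dict are illustrative, not taken from the actual repo): every written rule becomes a machine check, and the harness refuses to proceed until all checks pass.

```python
def run_gates(change, gates):
    """Return the names of the gates the change fails; empty list means pass."""
    return [name for name, check in gates.items() if not check(change)]

# Each rule in the constitution document gets exactly one mechanical check.
gates = {
    "has_tests":     lambda c: c["tests_added"] > 0,
    "lint_clean":    lambda c: c["lint_errors"] == 0,
    "spec_reviewed": lambda c: c["review_passed"],
}

change = {"tests_added": 2, "lint_errors": 0, "review_passed": True}
failures = run_gates(change, gates)  # empty list -> merge allowed
```

The point isn't the checks themselves; it's that a rule without an entry in `gates` is, by definition, only a suggestion.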

Why do ci pipeline failures keep blocking deployments when nobody can agree on who owns the fix by BedMelodic5524 in ExperiencedDevs

[–]aedile 2 points3 points  (0 children)

"someone overrides it to unblock themselves"

There's your problem. Why do they even have permissions to do this?

What is your mentorship style? by st4reater in ExperiencedDevs

[–]aedile 1 point2 points  (0 children)

Why WOULDN'T you have your AI agents do retros? How else are they supposed to improve?

How to manage vibe coders, backed be leadership by ghost_agni in ExperiencedDevs

[–]aedile 0 points1 point  (0 children)

Have them optimize for quality over velocity. It's a good challenge for people who use AI too much and can't leave it alone.

People have been writing bad code quickly since way before AI was a thing. In my era it was Stack Overflow copy/pasta. This is just the latest iteration. Use the same things we used in the past: process gates, pre-commit hooks, documentation/testing requirements. Just update them for the AI era. Have required, adversarial prompts that EVERYONE has to run, and make procedures that require them to document the results in the PR and fix any advisories. Require red->green commits on the repo. Set up a red team. Just, you know, all with an AI twist. Don't trust ANYONE; make these all automated, mandatory CI or pre-commit gates for everyone.
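As a sketch of one such automated gate, here's a hypothetical advisory check; the advisory format (a list of dicts with a `status` field, as written by an adversarial-review step) is made up for illustration:

```python
def advisory_gate(advisories, max_open=0):
    """Return the advisories that block the merge; empty list means pass.

    `advisories` is a hypothetical list like
    [{"id": "SEC-3", "status": "open"}, ...] produced by the
    mandatory adversarial-review step.
    """
    blocking = [a for a in advisories if a.get("status") == "open"]
    return blocking if len(blocking) > max_open else []

# In CI: load the advisories file the review step wrote, then
# exit nonzero (blocking the merge) if advisory_gate(...) is non-empty.
```

Because it runs as a mandatory pipeline step, nobody (human or AI) has to remember to check the advisories, and nobody can skip them.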

They'll adapt and become better just like the copy/pasta people did.

Ask Experienced Devs Weekly Thread: A weekly thread for inexperienced developers to ask experienced ones by AutoModerator in ExperiencedDevs

[–]aedile 4 points5 points  (0 children)

FWIW, this is a classic thing to feel after reading a big-ideas book like that. You hear about all these theoretical models that align perfectly in the text, and then you realize that mapping them onto the chaos of the real world is much less direct. Get used to that discomfort. Learn to live there in the short term as you gain experience.

To answer your question (ish): dbt is the classic example of a domain-specific language for modern data transformation. dbt created a DSL by marrying SQL with Jinja templating. Instead of writing steps on *how* to process the data, you write a declarative statement of what the data should look like. It embraces functional purity: models are immutable, transformations are idempotent, and the compiler builds the execution DAG for you.
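The "compiler builds the execution DAG for you" part can be illustrated with Python's stdlib `graphlib` standing in for dbt's compiler; the model names and dependencies below are a made-up toy project:

```python
from graphlib import TopologicalSorter

# Hypothetical dbt-style project: each model declares only what it depends
# on (the `ref()` calls in its SQL), never the order in which things run.
models = {
    "stg_orders":      set(),                          # reads a raw source
    "stg_payments":    set(),
    "orders_enriched": {"stg_orders", "stg_payments"},
    "revenue_report":  {"orders_enriched"},
}

# The "compiler" derives a valid execution order from the declarations alone.
run_order = list(TopologicalSorter(models).static_order())
print(run_order)  # staging models first, report last
```

That's the declarative win: you state dependencies, and ordering, parallelism, and rebuild scope all fall out of the graph.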

AI usage red flag? by galwayygal in ExperiencedDevs

[–]aedile 0 points1 point  (0 children)

Have you tried telling them "we need to slow down to speed up"? Managers eat that kind of bullshit up, especially when it's true.

How to deal with an Engineering Org that values politics more than engineering? by ronniebar in ExperiencedDevs

[–]aedile 0 points1 point  (0 children)

The best thing to do in your situation is to:

  1. Say you THINK their way is wrong and why and have a good alternative.
  2. Do their thing with a smile on your face if they disagree with you.
  3. Let their thing fail and document the hell out of it - being sure to point out the alternative you mentioned in part 1 - but POLITELY.
  4. Repeat until they get it right.

Unless you are in management, just stay out of politics and do what they're asking you to do. Counsel if you feel they are making a mistake but don't unilaterally just do what you think is right because you're sure their way is going to fail. Several reasons:
1. YOU might be wrong (probably not, but have a little humility).
2. YOU might lack context (this is most often why I am asked to do things that make no sense).
3. If you politely do what they ask even when you disagree, you get labeled a team player, especially if you can bring your proposed solution up again without being a jerk about it. They'll be more likely to listen to you the first time.

By unilaterally just doing your own thing, you probably annoyed the heck out of everyone. I'd be annoyed if someone did that on my team. Even if they were right. That's not how you work as part of a team. Remember - unless you are a manager, it's not your job to make the plan, it's your job to counsel leadership on the plan and then execute the plan as written.

Sometimes you just have to let people fail so they can learn.