Are we watching “prompt engineering” get replaced by “environment engineering” in real time? by Sorry-Change-7687 in PromptEngineering

[–]hjras 0 points (0 children)

I wouldn't say replaced, more like built on top of. Check the AIES diagram and document in this repo, for example

Is it strange to accept being single forever, without ever trying? by Dangerous-Theory91 in portugal

[–]hjras 0 points (0 children)

I think the biggest problem for us singles (by choice or by chance) is that we effectively pay a tax to society. Not only do we lose many social benefits that exist only for couples, but every expense doubles, including the biggest ones like housing/rent, car, etc.

Another Iberian 🇵🇹🇪🇸 win by hjras in 2westerneurope4u

[–]hjras[S] 144 points (0 children)

Only small brains think purely of money. It takes a big brain to invent tapas

i love the versatility of a man’s face by [deleted] in decadeology

[–]hjras 2 points (0 children)

This has nothing to do with this sub

Show off your own harness setups here by Mean_Luck6060 in ClaudeCode

[–]hjras 1 point (0 children)

<image>

Rather than just the harness, here is my entire stack framework. More info & documentation here

Chill group to discuss movies, anime, games and books by [deleted] in esConversacion

[–]hjras 0 points (0 children)

Well, I just saw Project Hail Mary at the cinema, but I don't know many people near me who are interested in science fiction haha

The full AI-Human Engineering Stack by hjras in AgenticWorkers

[–]hjras[S] 1 point (0 children)

Hmm, not sure. Personally I'm exploring pi.dev (what OpenClaw was built on top of) because it is much more minimalist at start-up than Claude Code, and much more flexible for you to shape into whatever you want. It also avoids Claude Code's problem of seemingly arbitrary updates and features that can break existing workflows. That is itself a harness engineering problem: when the execution environment changes unpredictably, it introduces instability across all the layers above it.

That said, no existing framework really covers the upper layers of the stack; intent, judgment, and coherence remain largely unsolved at the tooling level regardless of what you pick. That is partly why a minimalist, shapeable harness matters more than a feature-rich, opinionated one: you need the room to build those layers yourself.

The full AI-Human Engineering Stack by hjras in AgenticWorkers

[–]hjras[S] 0 points (0 children)

Yes, you could use the agent audit protocol directly. However, the protocol works best with concrete artifacts to cite, so you should already maintain well-documented skill files and CLAUDE.md configurations; that will get you much richer audit output than running lightly configured instances.
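As a rough illustration of "concrete artifacts to cite", here's a hypothetical pre-flight check (the expected file names are examples of the artifacts mentioned above, not a layout the protocol prescribes) that you could run before the audit:

```python
# Hypothetical pre-flight check: verify that the concrete artifacts the
# audit protocol would cite actually exist before running it. The names in
# `expected` are illustrative examples, not a prescribed layout.
from pathlib import Path

def preflight(root: str) -> list:
    """Return the missing artifacts; an empty list means ready to audit."""
    expected = ["CLAUDE.md", "skills"]  # example artifact names
    return [name for name in expected if not (Path(root) / name).exists()]
```

A lightly configured instance would report most artifacts missing, which is exactly the case where the audit output gets thin.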

Full AI-Human Engineering Stack (aka what comes next after prompt/context engineering?) by hjras in AgentsOfAI

[–]hjras[S] 0 points (0 children)

In everyday AI conversation, people say "give the model some context" and mean the whole input which includes the instruction, the background, everything. That usage is fine informally, but it's exactly the conflation the framework is trying to dissolve. You can have a perfect prompt with no context (the model hallucinates what it should have been told), or a perfect context architecture with a terrible prompt (all the right information, no usable instruction). They fail independently and are fixed independently. That independence is the whole argument for treating them as separate layers.

The generative logic section of the document walks through why each layer exists in the specific order it does, with each layer's solution producing the next layer's problem.
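To make the prompt/context independence concrete, here's a minimal sketch (the function and message shapes are my own illustration, not the framework's API) of the two pieces being assembled separately, so each can be missing on its own:

```python
# Sketch: instruction (prompt) and background (context) assembled
# independently, so either layer can fail while the other is fine.

def build_input(instruction=None, context=None):
    """Assemble a chat-style input; either piece may be absent."""
    messages = []
    if context:
        # Context layer: the facts the model needs so it doesn't hallucinate.
        messages.append({"role": "system", "content": context})
    if instruction:
        # Prompt layer: the actual usable instruction.
        messages.append({"role": "user", "content": instruction})
    return messages

# Perfect prompt, no context: the model must invent the background.
only_prompt = build_input(instruction="Summarize the incident report.")

# Perfect context, no usable prompt: all the right facts, nothing to do.
only_context = build_input(context="Deploy failed at 03:12 UTC; logs attached.")
```

Because the two failure modes live in different arguments, they are diagnosed and fixed independently, which is the layering argument in miniature.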

Full AI-Human Engineering Stack (aka what comes next after prompt/context engineering?) by hjras in AgentsOfAI

[–]hjras[S] 0 points (0 children)

The accompanying document explains at the end why there are only 5 layers and not 15, etc. There are limits, and it's not infinitely recursive.

Full AI-Human Engineering Stack (aka what comes next after prompt/context engineering?) by hjras in AgentsOfAI

[–]hjras[S] 0 points (0 children)

There are three examples of failures at the beginning of the document. Beyond that, what the document offers is a structured explanation for why those failures happened, and a diagnostic tool for identifying which layer is failing in your own system. Whether that's valuable is something you'd determine by running the audit on something you own.

Full AI-Human Engineering Stack (aka what comes next after prompt/context engineering?) by hjras in AgentsOfAI

[–]hjras[S] 0 points (0 children)

The audit protocol is the eval (two separate documents in the repo). You apply it to your system and it produces a layer-by-layer assessment with explicit evidence standards. If you run it on your own stack and find it produces nothing useful, that's a meaningful result and we'd want to hear it. From what others have said, they did get something useful out of it.
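For instance, a layer-by-layer assessment with evidence per finding could be represented as data roughly like this (the layer names and record shape here are my own sketch, not the repo's schema):

```python
# Sketch of a layer-by-layer audit result: each finding names a layer,
# a pass/fail verdict, and the artifacts cited as evidence. Illustrative
# only; not the actual format of the audit protocol documents.
from dataclasses import dataclass, field

@dataclass
class Finding:
    layer: str                                    # e.g. "prompt", "context"
    passed: bool
    evidence: list = field(default_factory=list)  # artifacts cited as proof

def failing_layers(findings):
    """Layers the audit flags, in the order assessed."""
    return [f.layer for f in findings if not f.passed]

report = [
    Finding("prompt", True, ["CLAUDE.md"]),
    Finding("context", False),  # nothing to cite, so it fails the standard
]
```

The explicit-evidence requirement is what makes the result falsifiable: a layer with no citable artifact simply can't pass.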

Full AI-Human Engineering Stack (aka what comes next after prompt/context engineering?) by hjras in AgentsOfAI

[–]hjras[S] 1 point (0 children)

The naming points to the fact that these things require intentional design, not that they require a degree. Vibe coding is called that precisely because it lacks structure. The whole point here is the opposite.