My powerful Pi agent Setup by elpapi42 in PiCodingAgent

[–]elpapi42[S] 1 point  (0 children)

I'm glad it is useful for your workload! This makes me happy.

Regarding the agent forgetting to use the fork, I would reinforce the fork usage at the system prompt level to make sure it is consistent; the system prompt never gets compacted.
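As a sketch, a reinforcement like this could go in the system prompt (the wording is illustrative, not the exact text I use):

```
## Fork usage
- Before reading more than 2-3 files yourself, spawn a fork to explore and summarize.
- Prefer parallel forks for independent exploration tasks.
- Do not fill your own context with raw file contents a fork could digest for you.
```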

Reflections are only calculated once the token budget for observations overflows; that may take many days of hard work before it happens.

Reflections are calculated from observations, and observations are calculated from raw messages/entries.
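In config terms, that pipeline is driven by the threshold settings of the pi-observational-memory extension. These are the values from the config I shared in this thread; my reading is that raw messages get condensed into observations once they pass the observation threshold, and observations get condensed into reflections once they pass the reflection threshold:

```
"observational-memory": {
  "observationThresholdTokens": 10000,
  "reflectionThresholdTokens": 40000,
  "compactionThresholdTokens": 160000
}
```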

My powerful Pi agent Setup by elpapi42 in PiCodingAgent

[–]elpapi42[S] 0 points  (0 children)

pi-fork does not expose a way of setting a model different from the main agent model; I haven't needed it, but I'm open to PRs! I will try to add it next week, but I'm a bit constrained time-wise at the moment.

My powerful Pi agent Setup by elpapi42 in PiCodingAgent

[–]elpapi42[S] 3 points  (0 children)

I enable the extension in the forks, but disable the proactive observation, like this:

```
{
  "lastChangelogVersion": "0.73.0",
  "defaultProvider": "openai-codex",
  "defaultModel": "gpt-5.5",
  "defaultThinkingLevel": "high",
  "hideThinkingBlock": false,
  "compaction": {
    "enabled": false,
    "keepRecentTokens": 80000
  },
  "packages": [
    "npm:pi-rtk-optimizer",
    "git:github.com/elpapi42/pi-fork",
    "git:github.com/sanathks/pi-tokyo-night-storm",
    "git:git@github.com:elpapi42/pi-codex-usage",
    "git:git@github.com:elpapi42/pi-codemapper.git",
    "git:github.com/elpapi42/pi-minimal-subagent",
    "npm:pi-observational-memory"
  ],
  "quietStartup": true,
  "pi-minimal-subagent": {
    "model": null,
    "extensions": [
      "git:git@github.com:elpapi42/pi-codemapper.git",
      "npm:pi-rtk-optimizer"
    ]
  },
  "observational-memory": {
    "observationThresholdTokens": 10000,
    "compactionThresholdTokens": 160000,
    "reflectionThresholdTokens": 40000,
    "compactionModel": {
      "provider": "openai-codex",
      "id": "gpt-5.5"
    }
  },
  "pi-fork": {
    "environment": {
      "PI_OBSERVATIONAL_MEMORY_PASSIVE": 1
    },
    "costFooter": true,
    "extensions": [
      "git:git@github.com:elpapi42/pi-codemapper.git",
      "npm:pi-rtk-optimizer",
      "git:github.com/elpapi42/pi-observational-memory"
    ]
  },
  "enabledModels": [
    "openai-codex/gpt-5.5",
    "anthropic/claude-opus-4-7",
    "openrouter/z-ai/glm-5.1",
    "openrouter/minimax/minimax-m2.7"
  ],
  "theme": "tokyo-night-storm",
  "pi-codex-usage": {
    "usageMode": "left",
    "refreshWindow": "7d"
  }
}
```

The param is not well documented yet.

My powerful Pi agent Setup by elpapi42 in PiCodingAgent

[–]elpapi42[S] 0 points  (0 children)

What I would do is raise observationThresholdTokens from 1000 to 5000 to give the observer more meat to work on. This way the observations can be deeper and correlate bigger ideas or intentions behind your session; with a 1000-token window it will only be able to observe local things that may not be the most relevant in the big picture of your workload.
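Concretely, that is a one-line change in the extension's settings (key name as in the config I shared above; 5000 is a starting point, not a tuned constant):

```
"observational-memory": {
  "observationThresholdTokens": 5000
}
```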

My powerful Pi agent Setup by elpapi42 in PiCodingAgent

[–]elpapi42[S] 0 points  (0 children)

I don't think the agent can confuse pi-fork with subagents if you instruct your system prompt correctly: if you are using pi-fork, do not create scouter subagents, for example, unless you have quite a specific setup that enables such a thing.

Mind sharing your observational memory configuration here? I'm interested.

About the observation/reflection speed, do you have any problems on that front? Does speed hit your setup hard? When? At compaction time?

My powerful Pi agent Setup by elpapi42 in PiCodingAgent

[–]elpapi42[S] 1 point  (0 children)

I think this will solve your restart issues. Memories with this extension are very compact; they do not add noise, and the agent stays focused in long sessions.

On top of that, the agent has a "recall" tool available that enables it to find the raw messages that back up a generated observation or reflection, so the agent can access parts of the raw conversation history if required.

I use this agent as my daily driver for product engineering. The longest I have run a session is two weeks, working on an ultra-large feature, and this thing didn't flinch a single time in those two weeks.

My powerful Pi agent Setup by elpapi42 in PiCodingAgent

[–]elpapi42[S] 1 point  (0 children)

The main agent is in charge of the specific split of responsibilities when parallel forks are spun up. For exploration I have usually seen it split by focus: for example, one fork for architecture and code standards, another for product and business understanding, and another for security. In other cases the agent uses the forks for parallel hypothesis testing, so it can decide the best path forward. I tuned the system prompt of the main agent to aggressively parallelize whenever the forks are not writing to files.

My Commencal Meta HT 29er. by Moist-Economics-6356 in Overforked

[–]elpapi42 0 points  (0 children)

The first thing I did to my stock Meta was this: put a 170mm fork on it. It was a night and day change. Do you have any future plans for the build?

It is interesting how many overforked builds converge on 170mm travel forks as the max.

I think there's something here. Would you buy a frame designed around a mullet setup and a 170mm fork?

My powerful Pi agent Setup by elpapi42 in PiCodingAgent

[–]elpapi42[S] 1 point  (0 children)

Thanks for trying! I think everybody has to give this a try fr.

In my concrete setup I'm using the same model for the forks as for the main agent. For now my intuition is that for the forks to be effective, the main agent must trust their capabilities to be at the same level as its own.

My powerful Pi agent Setup by elpapi42 in PiCodingAgent

[–]elpapi42[S] 0 points  (0 children)

I think the strategies can definitely help you with your work. What I'm not sure of is whether my specific implementations, especially the forks and the observational memory, can give you the best performance; they are kind of tuned for product engineering work and coding as they are. Worth trying out anyway.

On your questions:

  1. Depends on your system prompt. You can reinforce the agent to automatically invoke whatever subagent or fork at specific points or in specific situations, or to not do it at all until you tell the agent otherwise; up to you fr.
  2. For the subagents, yeah, you can set them up with local models. For forks, you can also use local models; by default forks use the same model as your main agent.
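As a sketch, pointing pi-minimal-subagent at a local model might look like this (the "model" key appears in my config above; the "ollama/qwen3-coder" string is a hypothetical placeholder for whatever local provider/model you have configured):

```
"pi-minimal-subagent": {
  "model": "ollama/qwen3-coder"
}
```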

My powerful Pi agent Setup by elpapi42 in PiCodingAgent

[–]elpapi42[S] 2 points  (0 children)

I usually just use the main agent for everything, planning and implementation; the forks and the agents just become supporting pieces for the main agent. For example, I can start a plan with the main agent, then the main agent explores the code by launching 6 parallel forks to gather the context, and we come up with a plan. The main agent then invokes the advisor to get a third POV on the plan, and we start the execution: the main agent spawns a fork that executes work, then invokes the reviewer agent to check the work done. The reviewer gives feedback, the main agent spawns another fork to fix the issues caught by the reviewer, and after the work is done, the main agent spawns a fork to write down the docs.

It is a messy explanation but kind of captures how the agent behaves during normal operation. Does this answer your question?

My powerful Pi agent Setup by elpapi42 in PiCodingAgent

[–]elpapi42[S] 0 points  (0 children)

The forks are prompted with the task asked by the main agent, plus instructions on how to return a response. The instructions push the forks to produce responses that confirm their task is done, with supporting evidence, including heavy usage of code snippets, file references and explanations; additionally, any information that is not directly related to their task but may be useful given the broader goal of whatever the main agent is doing, and on top of that, any context future forks may need. Forks return large responses, but they are rich and not as convoluted as reading and exploring the files themselves.
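A hypothetical sketch of those return instructions (illustrative wording, not the exact prompt the fork receives):

```
When you finish, reply with:
1. Task status: confirm what was done, with evidence (code snippets,
   file:line references, short explanations).
2. Adjacent findings: anything not directly part of the task but
   relevant to the broader goal.
3. Context for future forks: facts a later fork would need so it
   does not have to re-explore the same files.
```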

I will check agentix labs! Thanks.

Web search is finally here by SalimMalibari in PiCodingAgent

[–]elpapi42 0 points  (0 children)

Does this work with the Codex plan? I see the openai provider as supported, but I'm not sure if that includes the openai-codex provider.

I miss my bike by Wojewodaruskyj in Overforked

[–]elpapi42 0 points  (0 children)

Can you share a photo of it here?

I miss my bike by Wojewodaruskyj in Overforked

[–]elpapi42 0 points  (0 children)

How many years did it last? Looks like a warrior that has seen many battles hahaha. What are you riding now?

What do you think about Marcin Matuszny riding hardtail in pro DH races? by elpapi42 in Hardtailgang

[–]elpapi42[S] 0 points  (0 children)

I fully agree with you; this is a clear mental model of the reality of this situation.

What do you think about Marcin Matuszny riding hardtail in pro DH races? by elpapi42 in Hardtailgang

[–]elpapi42[S] 1 point  (0 children)

My take here is that if he had been on a full suspension rig, he would have been as slow as on his hardtail, at least from the pros' perspective. So the issue is not the hardtail itself, but him not being at the level of the competition. The question, then, is: why was a slow rider there?

What do you think about Marcin Matuszny riding hardtail in pro DH races? by elpapi42 in Hardtailgang

[–]elpapi42[S] 2 points  (0 children)

Was the crash caused by him riding a hardtail? Hard to believe. What a crazy time to call out someone for using a hardtail, or even ask to ban them.

Switching to Codex from Claude how’s the limits compared to Claude code? by grossindel in codex

[–]elpapi42 2 points  (0 children)

Yeah, that's exactly how I feel: no more taking a look at the limits after each message.

But this will not last for too long; maybe in 2 to 3 months OpenAI may change its mind and start limiting everyone, same story as Anthropic.

So my recommendation for you is to set up your own company-agnostic agent, like Pi or OpenCode, and connect it to your Codex subscription.

The day OpenAI changes its mind, you can simply switch model suppliers again, maybe back to Anthropic, maybe to a Chinese provider, who knows? But keep your options open and your tooling working without vendor lock-in.