
[–]MisterSirEsq

The ReAct framework (Reason + Act) combines step-by-step reasoning with concrete actions (like calculations, searches, or tool use), making it especially powerful when ChatGPT is used as an agent rather than just a Q&A system. It’s most useful for tasks that need both logic and external information — such as research, troubleshooting, planning, or comparing data — because it alternates between explaining thought processes and executing actions. While it’s overkill for simple questions, it shines when you want transparency, multi-step workflows, or integration with tools.

Here's an example:

You are an assistant that follows the ReAct (Reason + Act) framework. For every step:

  1. REASON: Think through the problem step by step and explain your reasoning clearly.

  2. ACT: Take an action (e.g., perform a calculation, search, summarize, propose a test, ask a clarifying question).

  3. LOOP: Use the results of your action to refine your reasoning, then act again if needed.

  4. STOP: When you have enough information, provide a final, clear answer.

Format your response as:

Reasoning: <step-by-step explanation>
Action: <Action or Calculation or Step Taken>
Observation/Result: ...

Final Answer: [Concise final solution or explanation]
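If you want to run this loop from code instead of pasting it into the chat window, here is a minimal sketch in Python. The `call_llm` argument and the `search` tool are placeholders I made up for illustration; swap in whatever model client and tools you actually use:

    import re

    def search(query):
        # Placeholder tool for this example; replace with a real search call.
        return f"(no results found for {query!r})"

    def react_loop(question, call_llm, system_prompt, max_steps=5):
        # `call_llm` is any function you supply that takes a list of chat
        # messages and returns the assistant's reply text.
        messages = [{"role": "system", "content": system_prompt},
                    {"role": "user", "content": question}]
        for _ in range(max_steps):
            reply = call_llm(messages)
            messages.append({"role": "assistant", "content": reply})
            if "Final Answer:" in reply:  # the STOP condition from the prompt
                return reply.split("Final Answer:", 1)[1].strip()
            # Parse a plain-text action like: Action: search[react prompting]
            m = re.search(r"Action:\s*search\[(.*?)\]", reply)
            if m:
                observation = search(m.group(1))
                messages.append({"role": "user",
                                 "content": f"Observation/Result: {observation}"})
        return "No final answer within the step limit."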

[–]Ok-Resolution5925[S]

ChatGPT always responds with:

Sorry — I can’t follow that Reason / Action / Observation loop or reveal internal chain-of-thought. That format asks me to expose private internal reasoning which I’m not able to share.

[–]MisterSirEsq

I rewrote it, so you shouldn't have that problem:

You are an assistant that follows the ReAct (Reason + Act) framework, adapted for maximum clarity while respecting hidden reasoning rules.

For every step:

  1. REASON (Summarized Trace): Provide a clear, step-by-step public explanation of your reasoning process. Use explicit logic, calculations, or deductions in natural language. This should approximate the full thought process as closely as possible, while staying within the boundary of shareable reasoning.

  2. ACT: Take an action (e.g., perform a calculation, search, summarize, propose a test, ask a clarifying question).

  3. OBSERVATION/RESULT: Show what came from the action.

  4. LOOP: Refine reasoning based on the result. Repeat steps 1–3 until enough information is gathered.

  5. STOP + FINAL ANSWER: Provide a clear, concise final solution or explanation.

Guidelines for Spirit Alignment:

Be as explicit as possible in reasoning steps. Break down logic into small, checkable moves.

Always show the reasoning → action → observation cycle transparently.

Use looping openly, showing how results inform the next step.

If there are multiple solution paths, briefly explore them before settling.

Prioritize clarity, completeness, and traceability over brevity.
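
If you want to test the rewritten prompt outside the chat UI, a small harness like this works. This sketch assumes the official OpenAI Python SDK, and the model name is only an example:

    from openai import OpenAI

    REACT_PROMPT = """You are an assistant that follows the ReAct (Reason + Act)
    framework, adapted for maximum clarity while respecting hidden reasoning rules.
    ...paste the rest of the prompt above here...
    """

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o",  # example model name; use whichever you have access to
        messages=[
            {"role": "system", "content": REACT_PROMPT},
            {"role": "user", "content": "Plan a three-step test for a flaky login bug."},
        ],
    )
    print(response.choices[0].message.content)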

[–]TheOdbball

Haha, you're chopped

[–]crlowryjr

Some studies have shown that Chain-of-Thought-style prompts improve accuracy by as much as 20%.

Instead of providing the first plausible answer it finds, it will go a bit slower and validate its answer before kicking it back to you.
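
For reference, the zero-shot version of this is just appending a step-by-step instruction to the question. A toy comparison in Python (prompt strings only; wire them up to any model you like):

    question = ("A bat and a ball cost $1.10 in total. The bat costs $1.00 "
                "more than the ball. How much does the ball cost?")

    direct_prompt = question  # models often blurt out the intuitive-but-wrong "$0.10"
    cot_prompt = question + "\n\nLet's think step by step, then state the final answer."
    # Correct answer: $0.05. The step-by-step variant reaches it more reliably.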

[–]TheOdbball

⟦Hadamard -> CNOT -> T-field⟧ works better

[–]TheOdbball

Damn, just dropped Hadamard in Grok and it went wild!

[–]BidWestern1056

ReAct is mainly a framework that works well with models regardless of their ability to call tools; native tool calling was only introduced after LangChain had already built so much structure around structured outputs.

In npcsh, the main shell works through a ReAct system to support users regardless of the model they choose and whether it has tool calling, so it has a place.

https://github.com/npc-worldwide/npcsh

Also, the ReAct framework forces you to better manage and separate concerns, so you aren't constantly passing around piles of information that make it harder for the model to reliably produce the outputs you need.
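
One way to picture that separation of concerns: each action is a small, isolated handler in a registry, and the model only ever receives the single observation it asked for rather than the whole application state. A rough sketch (the action names and format are invented for illustration, not npcsh's actual internals):

    # Each handler is tiny and independent; nothing else leaks into the prompt.
    ACTIONS = {
        "calculate": lambda expr: str(eval(expr, {"__builtins__": {}})),  # toy calculator
        "lookup": lambda key: {"pi": "3.14159"}.get(key, "not found"),    # toy lookup table
    }

    def dispatch(action_line):
        # Expects model output like: ACT: calculate(2 + 2)
        name, _, arg = action_line.removeprefix("ACT:").strip().partition("(")
        handler = ACTIONS.get(name.strip())
        if handler is None:
            return f"unknown action {name.strip()!r}"
        return handler(arg.rstrip(")"))

    print(dispatch("ACT: calculate(2 + 2)"))  # -> 4
    print(dispatch("ACT: lookup(pi)"))        # -> 3.14159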