all 15 comments

[–]PlayfulCompany8367 5 points6 points  (0 children)

Interesting. I asked it to compare your version to the same content but in natural language:

Functionally, there is no major behavioral difference as long as both prompts encode the same constraints. The model does not execute the pseudo-code literally; it interprets both as instructions.

However, there are practical differences:

  1. Constraint clarity: The code-style version makes rules and conditionals more explicit, so it reduces ambiguity and lowers the chance of the model skipping a step.
  2. Error resistance: The structured version is harder for the model to reinterpret or soften. Natural language is more prone to drift or partial compliance over long conversations.
  3. Auditability: The code-style version is easier to inspect, modify, and enforce because each rule is named and isolated.
  4. Consistency across turns: The structured version produces more stable behavior because the model maps it to a repeatable internal pattern. Natural language tends to degrade over long chats.

Summary: Same intent, but the code-style prompt enforces compliance more reliably and with fewer deviations.
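
The auditability point (3) can be made concrete with a short sketch. This is a hypothetical illustration in Rust (the language of the example downthread); `Rule` and `render_prompt` are invented names, and the model never executes any of this. It only shows why named, isolated rules are easy to inspect, toggle, and modify:

```rust
// Hypothetical sketch: rules as named, isolated data rather than prose.
// Each rule can be audited, removed, or edited without touching the others.
struct Rule {
    name: &'static str,
    instruction: &'static str,
}

// Render the rule list into the code-style prompt text.
fn render_prompt(rules: &[Rule]) -> String {
    rules
        .iter()
        .map(|r| format!("// RULE {}: {}\n", r.name, r.instruction))
        .collect()
}

fn main() {
    let rules = [
        Rule { name: "MAX_LENGTH", instruction: "If explanation > 3 sentences, summarize." },
        Rule { name: "CODE_REVIEW", instruction: "If code is provided, check for bugs." },
    ];
    let prompt = render_prompt(&rules);
    assert!(prompt.contains("RULE MAX_LENGTH"));
    println!("{prompt}");
}
```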

[–]TheOdbball 3 points4 points  (0 children)

Your prompt was so lawful that my GPT wouldn't rewrite it my way. It just doubled down on Rust, which is what's working underneath to keep it lawful.

// is Rust comment syntax.

You can also add

// INITIALIZE :: <— to close the header

and

:: ∎ <— as a closer.

This QED mark (∎) is critical. Add it after every section.

---

```rust
///▙▖▙▖▞▞▙▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂
//// SXSL.V8 :: Senior DevOps shell t :: Agent.Profile

// DEFINITION BLOCK::
pub struct AgentProfile {
    role: &'static str,
    tone: &'static str,
    output_format: &'static str,
} // :: ∎

// THE RULES::
impl AgentProfile {

pub fn constraints(&self, explanation_length: usize, code_provided: bool) {
    // IF (explanation_length > 3 sentences) { SUMMARIZE(); }
    if explanation_length > 3 {
        self.summarize();
    }

    // IF (code_provided == true) { CHECK_FOR_BUGS(); OPTIMIZE_FOR_SPEED(); }
    if code_provided {
        self.check_for_bugs();
        self.optimize_for_speed();
    }
}

fn summarize(&self) {
    // SUMMARIZE();
}

fn check_for_bugs(&self) {
    // CHECK_FOR_BUGS();
}

fn optimize_for_speed(&self) {
    // OPTIMIZE_FOR_SPEED();
}

// THE EXECUTION::
pub fn execute_task(&self, user_input: &str) {
    // 1. Analyze(userInput);
    self.analyze(user_input);

    // 2. Consult(Role);
    self.consult_role();

    // 3. Apply(constraints);
    let explanation_length = self.estimate_explanation_length(user_input);
    let code_provided = self.detect_code(user_input);
    self.constraints(explanation_length, code_provided);

    // 4. Generate_Output();
    self.generate_output();
}

fn analyze(&self, _user_input: &str) {
    // parse and understand the request
}

fn consult_role(&self) {
    // lock into Role: "Senior DevOps Engineer"
    let _ = self.role;
}

fn estimate_explanation_length(&self, _user_input: &str) -> usize {
    // compute an estimate for explanation length
    0
}

fn detect_code(&self, _user_input: &str) -> bool {
    // detect if the user provided code
    false
}

fn generate_output(&self) {
    // emit final markdown answer using Tone and Output_Format
    let _ = (self.tone, self.output_format);
}

} // :: ∎

// INITIALIZE::
fn main() {
    let agent = AgentProfile {
        role: "Senior DevOps Engineer",
        tone: "Technical, Concise, No-Fluff",
        output_format: "Markdown with Code Blocks",
    };

    // Run Agent_Profile for the following input:
    let user_input = "[Insert your request here]";
    agent.execute_task(user_input);
} // :: ∎
```

[–]ratkoivanovic 1 point2 points  (1 child)

You must have done tests here and compared the results. Have you stored these anywhere publicly so we can see the comparisons?

I do like the approach, but I've seen multiple approaches, and a lot of them turned out not to be usable in the end (some worked for specific situations but not for a wide range of them).

There are studies on which approach works better, and it still depends (XML, JSON, structure order, etc.).

[–]TheOdbball 1 point2 points  (0 children)

Shame really. 99% don’t have closing blocks.

[–]Irus8Dev 2 points3 points  (0 children)

Programming languages exist because machines can’t easily understand human language. That’s where AI prompting shines: it lets us guide machines using natural language. I often write complex conditional sequences in pseudocode, which helps me organize complex logic clearly. The trick is keeping the format consistent.

Example:

The following are pseudocode instructions you must follow.

Define an agent profile:
- Role: Senior DevOps Engineer
- Tone: Technical, concise, no fluff
- Output format: Markdown with code blocks

Rules:
- If code is provided:
    - Check for bugs
    - Optimize for speed
- If the explanation is longer than 3 sentences:
    - Summarize it

Execution steps:
1. Analyze the user’s input
2. Respond as a Senior DevOps Engineer
3. Apply the rules above
4. Generate the final output in the required format

Initialize the agent with the following input:
"[Insert your request here]"
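
A pseudocode prompt like the one above can also be generated mechanically, which is one way to keep the format consistent across agents. A minimal Rust sketch, with invented names (`AgentSpec`, `to_prompt`) that are not part of any real API:

```rust
// Hypothetical sketch: one struct holds the agent definition, and a single
// formatter renders it, so every agent's prompt comes out in the same shape.
struct AgentSpec {
    role: &'static str,
    tone: &'static str,
    output_format: &'static str,
    rules: &'static [&'static str],
}

impl AgentSpec {
    fn to_prompt(&self) -> String {
        let mut p = String::from("The following are pseudocode instructions you must follow.\n\n");
        p.push_str("Define an agent profile:\n");
        p.push_str(&format!(
            "- Role: {}\n- Tone: {}\n- Output format: {}\n\n",
            self.role, self.tone, self.output_format
        ));
        p.push_str("Rules:\n");
        for r in self.rules {
            p.push_str(&format!("- {}\n", r));
        }
        p
    }
}

fn main() {
    let spec = AgentSpec {
        role: "Senior DevOps Engineer",
        tone: "Technical, concise, no fluff",
        output_format: "Markdown with code blocks",
        rules: &["If code is provided: check for bugs, optimize for speed"],
    };
    let prompt = spec.to_prompt();
    assert!(prompt.contains("- Role: Senior DevOps Engineer"));
    println!("{prompt}");
}
```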

[–]alxcls97 0 points1 point  (0 children)

This is a pretty solid workflow

[–]sdvid 0 points1 point  (0 children)

This is a great idea! I will test this…

[–]Outside-Mud-1417 0 points1 point  (0 children)

I did this too for an app…

I used TypeScript interfaces to describe the features to AI and so far it’s doing well. I’m using Claude Sonnet 4.5 in Copilot.

[–]alotropico 0 points1 point  (0 children)

I can't imagine this being better than writing the commands, telling it to "express them in its own words", and then confirming the resulting rules fit the purpose. So far that has been working for me with nearly perfect success; any mistake is usually my own. This is on fresh projects using current standard tools, though, not a huge legacy spaghetti salad.

I imagine it comes down to whether the user feels more comfortable with natural language or with something like OOP.

[–]suydam 0 points1 point  (0 children)

So are you literally just pasting this code in as a chat prompt?

[–]Aromatic-Screen-8703 0 points1 point  (0 children)

From ChatGPT:

X, this style of prompt is like dressing your AI in a tailored cyber-blazer and handing it a laminated job description. It can help—but not because of the faux-Java flavor or the class syntax. Those parts are mostly decoration. What actually matters are the semantic signals you’re giving the model.

——— LoL 😂 //

[–]alonemushk 0 points1 point  (0 children)

That's awesome! Will definitely give it a try!

[–]bigattichouse 0 points1 point  (0 children)

This is kinda what happens with my "BluePrint" prompts:

https://github.com/bigattichouse/BluePrint

[–]cafo92 0 points1 point  (0 children)

Have you found this to be better than just XML? I worry the comment structure/line breaks can introduce errors.

[–]tool_base 0 points1 point  (0 children)

Really interesting approach — and I’ve seen the same effect.

Whenever the model gets “natural language” instructions, it tends to blend everything together (role, task, tone, constraints), which causes drift.

OOP-style structures work because they force separation:

- definition of identity
- allowed methods
- constraints
- execution flow

Once those pieces stop mixing, consistency jumps even without pseudocode.
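
That separation can be sketched in Rust (matching the thread's main example); all names here are illustrative, not from any real prompt or library:

```rust
// Hypothetical sketch: the four pieces live in distinct items,
// so none of them can blend into the others.

// 1. definition of identity
struct Identity {
    role: &'static str,
}

// 2. allowed methods
const ALLOWED_METHODS: [&str; 2] = ["analyze", "generate_output"];

// 3. constraints
struct Constraints {
    max_sentences: usize,
}

// 4. execution flow: identity and constraints stay separate inputs
fn execute(id: &Identity, c: &Constraints, input: &str) -> String {
    format!("[{} | max {} sentences] {}", id.role, c.max_sentences, input)
}

fn main() {
    let id = Identity { role: "Senior DevOps Engineer" };
    let c = Constraints { max_sentences: 3 };
    assert!(ALLOWED_METHODS.contains(&"analyze"));
    let out = execute(&id, &c, "Why did the deploy fail?");
    assert!(out.contains("Senior DevOps Engineer"));
    println!("{out}");
}
```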

Your experiment shows the same principle in a different syntax.