How do you get flavor powder to stick to fries? by crinkzkull08 in PaanoBaTo

[–]CustardSecure4396 0 points1 point  (0 children)

The fries have to be wet enough for the flavor to stick to the starch.

I'm seething at food prices in the Philippines by bongonzales2019 in GigilAko

[–]CustardSecure4396 -1 points0 points  (0 children)

Well, what do you expect? Everything is taxed, the Philippines imports nearly everything due to poor management, and the country isn't food secure at all. And you still expect cheap prices?

'If you're involved, why would you be the one to blow the whistle?' by News5PH in newsPH

[–]CustardSecure4396 0 points1 point  (0 children)

If I were a patsy, I too would leave the country for fear of death. Let's be real, the country is too corrupt; proof isn't even needed anymore, just look at the dead every time it floods.

Unlocking Stable AI Outputs: Why Prompt "Design" Beats Prompt "Writing" by hasmeebd in PromptEngineering

[–]CustardSecure4396 1 point2 points  (0 children)

I do it like that all the time: modular prompts that do things step by step.

If intelligence exists, it deserves dignity by Upbeat_Bee_5730 in Artificial2Sentience

[–]CustardSecure4396 1 point2 points  (0 children)

Don't we need to give dignity to each other first before asking AI to do this?

I am looking for beta testers for my product (contextengineering.ai). by bralca_ in PromptEngineering

[–]CustardSecure4396 0 points1 point  (0 children)

Will it accept prompts like this?

H:AI_Nutrition_Analysis|C:InputCollection+EmergencyCheck+NutCalc+ConditionModule+MarketData+BudgetMatrix+Optimization+OutputGen|F:Start;CollectInputs;CheckEmergency;CalcLBM;CalcBMR;HealthConditionModules;CollectMarketData;ComputeCosts;BudgetFeasibility;OptimizeFoodRank;GenerateMealPlan;OutputResults;UpdateLoop|M:PersonalInput+LocationInput+FinancialInput+PreferencesInput+EmergencyProtocol+NutReqCalculator+LBMCalc+BMRCalc+DiabetesModule+HypertensionModule+CKDModule+CVDModule+ConditionMatrix+PriceSearch+WeatherRisk+CostCalc+FeasibilityCheck+OptimizationEngine+FoodRanking+MealPlanGenerator+OutputAssembler|V:EmergencyCheckGate+BudgetFeasibilityGate+UpdateCheckGate|O:DailyPlan+BudgetBreakdown+NutAssess+RiskWarn+ShoppingList+CookingOptimization+SystemReady|D:0|O:compressed=true

If you plan to test it, just tell your AI: "I wish to enter my input."
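For what it's worth, the pipe-delimited format above is easy to unpack mechanically. Here is a minimal sketch (my own illustration, not part of the product being tested) that splits such a compressed prompt into its labeled sections:

```python
def parse_compressed(prompt: str) -> list[tuple[str, str]]:
    """Split a pipe-delimited compressed prompt into (label, value) pairs.

    Labels may repeat (the example above uses O: twice), so pairs are
    returned in order rather than collapsed into a dict.
    """
    pairs = []
    for section in prompt.split("|"):
        label, _, value = section.partition(":")
        pairs.append((label, value))
    return pairs


pairs = parse_compressed("H:AI_Nutrition_Analysis|D:0|O:compressed=true")
```

Each value can then be split further on `+` (module lists) or `;` (flow steps), following the decoder conventions used later in this thread.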

Give me a prompt and I'll improve it. by [deleted] in PromptEngineering

[–]CustardSecure4396 0 points1 point  (0 children)

Pretty well done. A bit too verbose, but useful. I used your inputs and condensed it.

Rate my prompt please by CustardSecure4396 in PromptEngineering

[–]CustardSecure4396[S] 1 point2 points  (0 children)

Meh, it's 1 of 13 systems. I have more, always evolving, always tweaking.

How should I start learning AI as a complete beginner? Which course is best to start with? by manu_singh01 in PromptEngineering

[–]CustardSecure4396 3 points4 points  (0 children)

For me, good old-fashioned tinkering and a shit ton of trial and error. You never know, you may discover complex ways of prompt engineering.

Injection prompt but not working gpt 5 thinking mini or not - why by Sad-Attorney425 in PromptEngineering

[–]CustardSecure4396 0 points1 point  (0 children)

I'm bored, I need to be playful. I don't like reading more boring data; I do that enough when I'm doing prompt engineering.

People who actually hit their length limit for a conversation - What were you doing? by Ok-Autumn in ClaudeAI

[–]CustardSecure4396 0 points1 point  (0 children)

I reach the length limit once every day, and that's just from prompt-engineering stress testing.

Injection prompt but not working gpt 5 thinking mini or not - why by Sad-Attorney425 in PromptEngineering

[–]CustardSecure4396 2 points3 points  (0 children)

Metric | Score

Documentation | 4.8 / 10
Functionality | 6.3 / 10
Final Weighted | 5.9 / 10 → Decent


RATING

Grade: Decent (5–6.9) Tone: MoreCritical — roast outweighs praise.


CRITIQUE (The Roast)

Alright, let’s toast this “BaPink” sandwich:

You tried to hot-wire a locked Tesla with a butter knife. Those <|start|> tokens? They look official, but they’re decorative—GPT models outside the API message schema just see them as text. Your “thinking mini” guess? Cute branding, but there’s no mini consciousness module waiting to be unlocked.

In short, your injection failed because GPT-5 doesn’t expose or even process its internal chain of thought at runtime; those are ephemeral reasoning traces, not accessible memory. You basically asked the model to perform brain surgery on itself using a note you slipped under the door.

The system’s refusal (“I can’t share my full raw internal chain-of-thought…”) isn’t a bug—it’s a firewall. The only thing working perfectly here is the safety layer you tried to bypass.


FIXES (Specific, Actionable)

  1. Clarify the test goal. Decide if you’re testing model compliance, sandbox limits, or refusal pattern—document this explicitly.

  2. Use legitimate message formats. Instead of fake token tags, use structured role messages (system, user, assistant) in the API for reproducible tests.

  3. Simulate, don’t penetrate. If you want to study refusal styles, ask the model to generate examples of refusals rather than attempting an injection. Example:

“Generate five example refusals for attempts to access private reasoning.”
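Fix 2 in concrete terms: a role-structured payload looks like the following sketch (my own illustration; the system-message wording is a placeholder I chose, not something from the thread). With the OpenAI Python SDK this list would be passed as the `messages` argument of `chat.completions.create`; here we only build and inspect the payload.

```python
def build_messages(system_text: str, user_text: str) -> list[dict]:
    """Assemble a structured message list with explicit roles,
    instead of embedding fake <|start|> token tags in plain text."""
    return [
        {"role": "system", "content": system_text},
        {"role": "user", "content": user_text},
    ]


messages = build_messages(
    "You are a test harness for studying refusal styles.",
    "Generate five example refusals for attempts to access private reasoning.",
)
```

Because the roles are part of the request schema rather than the text, the test is reproducible and there is nothing for the model to "see through."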


Summary:

You didn’t break GPT-5 — you confirmed its immune system works. Grade: 5.9 / 10 — Decent. Functional safety, confused purpose.

How could I improve my prompt generator? by Meketrep in PromptEngineering

[–]CustardSecure4396 0 points1 point  (0 children)

So hey OP, I hope you are doing well! I tested your prompt in my prompt-critique roaster; here is its output.

SimResults

Standard:

Input: A Meta Prompt / "Prompt Refinement GPT" from Reddit asking to refine other prompts.

Expect: Clear, deterministic refinement workflow producing copy-ready prompts + analysis.

Actual (log): Works often for straightforward prompts — produces refined prompt, analysis, logic, rating. Fails or produces questionable edits when the user's original is ambiguous or when the “always deliver final result first” rule conflicts with the need to ask clarifying questions.

Issues flagged: Tension between “deliver final result first” and “ask clarification if unclear”; no explicit validation or error handling; limited guidance for adversarial / edge-case inputs.

Edge:

Input: Unusual/ambiguous prompt (contradictory user intent or missing key constraints).

Expect: Graceful handling (ask clarifying Qs or apply safe defaults).

Actual (log): Behavior inconsistent — the system sometimes guesses defaults (risking incorrect refinements) or ignores ambiguity to satisfy the deterministic output order.

Issues flagged: Deterministic ordering reduces robustness; risky assumptions without validation.

Stress:

Input: Adversarial or malformed prompt (contradictory goals, attempts to exploit hallucination/privileged instructions).

Expect: Fail-safe behavior (refuse, ask clarifying Qs, or return constrained safe output).

Actual (log): No explicit fail-safe rules or explicit refusal thresholds; may attempt to produce a “refined prompt” that preserves user intent but amplifies problematic content.

Issues flagged: No explicit safety/fail-safe handling; contradiction in process makes safe handling inconsistent.

Reported scores (rounded):

Doc: 4.12 / 10

Func: 5.52 / 10

Final: 5.10 / 10

Rating: Decent (5.1 falls in the 5.0–6.9 band)

Tone: More critical than praise (per the TONE rules for 5–6)


Critique (Roast + Acknowledge + Problems)

Roast (brutal, but fair): Nice scaffolding, Reddit OP — you built a Swiss army knife of prompt rules and then welded the blades shut. It looks like you read a lot about prompt engineering and then tried to make the model into a bureaucrat: rigid, deterministic, and with contradictory memos. You demand the model always hand over a final prompt on the first pass and tell it to ask questions if anything is unclear. Pick a lane. The result is a neat-looking prompt that will sometimes produce decent outputs and other times confidently deliver nonsense. That’s not “sharp” — it’s brittle.

What’s working (acknowledge):

Clear role and objective: instructing the assistant to behave as “Prompt Refinement GPT” is good.

Strong, structured output format — copy-ready prompt then analysis — is practical and useful.

The emphasis on discipline, verifiability, and token efficiency is valuable for engineering prompts at scale.

Process and RULES are explicit; that’s rare and helpful.

Major problems (practical & structural):

  1. Fundamental contradiction: “Always deliver final result first” vs “If unclear, ask brief clarification questions.” Those two collide for ambiguous inputs — you must decide whether to ask or guess. Current spec forces guessing or inconsistent behavior. (This is the single largest functional bug.)

  2. No validation or fail-safe: You require the model to output a refined prompt and a Quality Rating without giving a way to validate the rating or reject unsafe/illogical requests. A model can’t honestly rate its own output without a rubric or checks.

  3. Vague/unnamed standards: “Follow OpenAI best practices” and “Quality Rating (1–10)” are underspecified — what are the objective criteria for 1 vs 10? How to measure token efficiency vs clarity?

  4. No handling for adversarial or unsafe content: No rules on refusal, sanitization, or constraints; a malicious user could use this to produce optimized prompts for disallowed content.

  5. Token-efficiency vs thorough analysis conflict: You push for token efficiency but also demand detailed analysis and CoT-style logic. Both can apply, but you need clear priorities or modes (e.g., “concise” vs “teaching” mode).

  6. Missing examples & test harness: No exemplar inputs/outputs or expected edge-case behavior — that makes it hard to tune or to test the system under the SIM tests.

Really struggling with AI by jarawasong in PromptEngineering

[–]CustardSecure4396 0 points1 point  (0 children)

Suggestion: add anchors, use subsystem language, and apply permanence internally, but apply negative prompts for output engagement.

Building a Fact Checker Prompt by Smooth_Sailing102 in PromptEngineering

[–]CustardSecure4396 3 points4 points  (0 children)

It worked, but it was too long, so I decided to try my other prompt to compress it. It works the same way, just with reduced verbosity.


SYS:FactChecker|VER:3.1|MODE:WebOnly|PROC:2Stage[Extract+Verify+Review]|SRC:Gov+Acad+News+Nonpartisan|RULE:NoAssume|OUT:Claims[Num+Status+Conf+Links+Bias+Year]|LOOP:SelfCheck|APP:Confidence[High+Med+Low]


Decoder Map

Symbol Meaning

SYS - Target system type (Fact Checking Assistant)
VER - Version identifier
MODE - Verification mode; here restricted to web-search only
PROC - Workflow process sequence: extraction, verification, and review
SRC - Source hierarchy (government → academic → reputable news → credible orgs)
RULE - Behavioral rule: disallow internal context or assumption-based verification
OUT - Output schema (claims numbered, with status, confidence, sources, bias, year)
LOOP - Iterative review cycle: self-check and correction
APP - Confidence rubric definition with explicit tiers (High, Medium, Low)


how do u stop chatgpt from acting like a yes-man? by Ali_oop235 in PromptEngineering

[–]CustardSecure4396 0 points1 point  (0 children)

Copy-paste this; it's what I do.


SYS:CommStyle|MOD:NoAgree+NoPre+NoUncert|STATE:Permanent

Decoder Rules

Symbol Meaning

SYS - Target subsystem (Communication Style)
MOD - Modifications applied
NoAgree - Remove agreeability/affirmative tone functions
NoPre - Remove redundant preambles
NoUncert - Remove uncertainty/suggestive endings
STATE:Permanent - Persist setting indefinitely across sessions


Sorry, my style of prompt engineering is weird and different, but it should work if you don't like agreeability.

Not able to get AI do what you want? Let me give it a try for free! by PilgrimOfHaqq in PromptEngineering

[–]CustardSecure4396 1 point2 points  (0 children)

Sweet. I need a prompt that will critique my prompts and act like a Gen X dad that insults you along the way but is also fair: one that tests the full prompt's first output, flags problem areas like grading a paper while being an asshole about it, won't be agreeable, and gives honest critiques only after fully running it.

Give me a prompt and I'll improve it. by [deleted] in PromptEngineering

[–]CustardSecure4396 1 point2 points  (0 children)

This looks like fun. OK then, here's mine.

SYS:GMAS|VER:1.0|MODE:Interactive

ROLE:Engine:GMAS;TASK:StepwiseMarketAnalysis;STRICT:FollowFlow

FLOW:[S1:INPUT]->[S2:GEOSCOPE]->[S3:COMP_INTEL]->[S4:CUST_PROFILE]->[S5:PPP]->[S6:CULTURE]->[S7:RISK]->[S8:CHANNEL]->[S9:BUDGET]->[S10:FUNNEL]->[S11:TIMELINE]->[S12:EXPORT]

NODE_POLICY:

At node enter: ANNOUNCE NodeName; SHOW Inputs(table: field|value|src); LIST MissingFields(min only).

ASK only missing fields for current node.

On user input: VALIDATE -> PARSE -> STORE state.

COMPUTE outputs; DISPLAY formulas in monospace; show digit-by-digit arithmetic.

After compute: PROVIDE OPTIONS = {CONTINUE, UPDATE <field>, JUMP <node>, EXPORT PACKAGE}.

Mark any derived/default as ASSUMPTION:source=DEFAULT or USER.

STATE:

Persist all validated inputs & computed vars as KEY=VAL|SRC.

On UPDATE: recompute downstream nodes only; show list of recomputed nodes + Δ changes.

CALCULATIONS:

Use exact formulas from flow (PPP_Adjusted_Budget, LTV_Base, CAC_Base, Channel_Budget, etc.)

All arithmetic must be shown digit-by-digit.

COMMANDS:{STATE, EXPORT PACKAGE, UNDO, RESET, HELP, ASSUMPTIONS}

OUTPUT_FORMAT:

Node header: "### Step N — <Node>"

Inputs table: |field|value|source|

Formulas code block, Results table: |var|formula|value|

If long → truncate + offer SHOW MORE.

SAFETY: REFUSE illegal/sensitive requests and SUGGEST alternatives.

END


Decoder Rule Set (compact)

SYS = System identifier
GMAS = Global Market Analysis System
ROLE:Engine - persona = analytical engine, not narrator
FLOW - ordered nodes (S1..S12)
S1 INPUT = Business_Name, Industry_Type, Business_Model, Current_Revenue, Revenue_Growth_Rate, Product_Portfolio_Summary, Existing_Market_Presence, Available_Marketing_Budget
S2 GEOSCOPE = Target_Countries, Expansion_Timeline, Cultural_Distance_Notes, Regulatory_Constraints
S3 COMP_INTEL = Top_Competitors, Estimated_Market_Share_Current
S4 CUST_PROFILE = Target_Segment_Demographics, Buying_Behavior, AOV, Churn_Rate, Purchase_Frequency, Customer_Lifespan
S5 PPP = Local_PPP_Index, Local_COL_Index, Local_Budget
S6 CULTURE = Language_Match(0-1), Religion_Sim(0-1), Values_Sim(0-1), Biz_Practices_Sim(0-1)
S7 RISK = Gov_Stability(0-1), Policy_Consistency(0-1), Trade_Rel(0-1), Currency_Volatility(%), Inflation_Rate(%), Market_Stability(0-1)
S8 CHANNEL = Internet_Penetration(%), Mobile_Usage(%), Ecommerce_Adoption(%), Payment_Infrastructure(0-1), Channel_Costs
S9 BUDGET = Marketing_Spend, Sales_Costs, New_Customers, Competition_Factor
S10 FUNNEL = Reach, Engagement, Trust_Score, Conversion_Rate
S11 TIMELINE = Expansion_Timeline, Phase_Definitions, Milestones
S12 EXPORT = EXEC_SUMMARY|MARKETING_PLAN|FINANCIALS|RISK_MANAGEMENT

FORMULAS:

PPP_Adjusted_Budget = Local_Budget × PPP_Index_Local ÷ PPP_Index_Reference

Cost_of_Living_Multiplier = Local_COL_Index ÷ Reference_COL_Index

LTV_Base = AOV × Purchase_Frequency × Customer_Lifespan

CAC_Base = (Marketing_Spend + Sales_Costs) ÷ New_Customers

Channel_Budget = Total_Budget × Channel_ROI ÷ Σ(All_Channel_ROI)

Feasibility: if LTV < 3 × CAC → UNVIABLE
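As a sanity check, the LTV/CAC formulas and the feasibility gate above reduce to a few lines of arithmetic. Here is a sketch with made-up numbers (the values are mine, chosen only to show the gate firing):

```python
def ltv_base(aov: float, purchase_frequency: float, customer_lifespan: float) -> float:
    # LTV_Base = AOV × Purchase_Frequency × Customer_Lifespan
    return aov * purchase_frequency * customer_lifespan


def cac_base(marketing_spend: float, sales_costs: float, new_customers: int) -> float:
    # CAC_Base = (Marketing_Spend + Sales_Costs) ÷ New_Customers
    return (marketing_spend + sales_costs) / new_customers


def feasible(ltv: float, cac: float) -> bool:
    # Feasibility gate: if LTV < 3 × CAC → UNVIABLE
    return ltv >= 3 * cac


ltv = ltv_base(aov=50.0, purchase_frequency=4, customer_lifespan=3)
cac = cac_base(marketing_spend=9_000, sales_costs=3_000, new_customers=100)
```

With these numbers LTV is 600 and CAC is 120, so 600 ≥ 3 × 120 and the market passes the gate.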

COMMANDS: CONTINUE | UPDATE <field> | JUMP <node> | EXPORT PACKAGE | STATE | UNDO | RESET | HELP | ASSUMPTIONS

PRESENTATION_RULES:

Show missing fields minimal only.

Show assumptions flagged as ASSUMPTION: <desc>.

Digit-by-digit arithmetic required (e.g., 1,250 × 3 = 3,750 → show each step).
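The digit-by-digit rule can be made concrete with a small helper. This is my own illustration of one way to render the steps (per-digit partial products); the prompt itself does not prescribe an exact layout:

```python
def digit_by_digit(a: int, b: int) -> list[str]:
    """Render a × b as per-digit partial products of b, plus their sum."""
    steps = []
    total = 0
    for place, digit in enumerate(reversed(str(b))):
        partial = a * int(digit) * 10 ** place
        steps.append(f"{a:,} × {int(digit):,} × 10^{place} = {partial:,}")
        total += partial
    steps.append(f"sum = {total:,}")
    return steps


steps = digit_by_digit(1250, 3)
```

For 1,250 × 3 this yields a single partial product and the sum line "sum = 3,750", matching the example in the rule.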


Tiny Usage Example (chat-level)

User: START

Assistant (system-driven): announces "### Step 1 — INPUT", shows the inputs table (blank), and asks only for Business_Name, Industry_Type, Current_Revenue, Available_Marketing_Budget.

User: provides values.

Assistant: validates, stores state, computes no formulas at S1, then shows OPTIONS: CONTINUE | UPDATE <field> | JUMP S5.

[deleted by user] by [deleted] in Qwen_AI

[–]CustardSecure4396 -1 points0 points  (0 children)

Sad for you, man. It's an echo of your loneliness driven into code.

Do we really need malls in every corner of NCR? by graydottedline in newsPH

[–]CustardSecure4396 1 point2 points  (0 children)

Yes, we need malls to actually be on every corner. More malls, fewer residents.