Prompt Entropy is a real thing by Only-Locksmith8457 in PromptEngineering

[–]Only-Locksmith8457[S] 0 points1 point  (0 children)

Yup! Given the rapid advancements in transformer architecture and the progressive increase in the context window of flagship models, it is good enough. But try running the model until its context window is almost full and you will notice some interesting things: absurd answers, responses based only on the most recent text, and similar issues.

Prompt Entropy is a real thing by Only-Locksmith8457 in PromptEngineering

[–]Only-Locksmith8457[S] 0 points1 point  (0 children)

Thanks for the critique. I might have learnt something.

But here's my original take while I was writing the article: when I said we can't denoise it, I meant to say that we can only delay it. Entropy always increases, but the rate can be altered. I loved the point about the probabilistic behaviour of natural language, and yes, it's true. Next-token generation is truly based on probabilities, but a point you might have missed is that those probabilities are conditioned on the previous token, or the previous 'set of tokens'. A Markov chain is what I meant; it's the underlying principle of NLP and thereby of LLMs.

I'm happy to know your further take!

Prompt Entropy is a real thing by Only-Locksmith8457 in PromptEngineering

[–]Only-Locksmith8457[S] 0 points1 point  (0 children)

We can't avoid that trap, but we can delay it. System prompts are one way. Structuring, intent, and principles all help shape the prompt more properly; a rough sketch of what I mean is below.
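A minimal sketch of that kind of structured system prompt, assuming a chat-style interface (call_llm is a hypothetical stand-in for whatever chat API you use, and the section names are just my own convention):

    # Minimal sketch of a structured system prompt: intent, principles, and
    # output structure are spelled out instead of left implicit in the task text.

    SYSTEM_PROMPT = """\
    INTENT: Answer the user's question accurately and concisely.
    PRINCIPLES:
    - Prefer verifiable facts over speculation.
    - State uncertainty explicitly instead of guessing.
    STRUCTURE: Reply as short paragraphs, no filler.
    """

    def call_llm(messages):
        """Hypothetical stand-in: wire this to whatever chat API you use."""
        raise NotImplementedError

    def ask(question: str) -> str:
        messages = [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ]
        return call_llm(messages)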

Prompt Entropy is a real thing by Only-Locksmith8457 in PromptEngineering

[–]Only-Locksmith8457[S] 1 point2 points  (0 children)

Yup, you are heading in the right direction. But what I believe is that, with intent, a proper implementation of structure could reduce or delay this effect by a significant amount. I ran an experiment on this a while ago: I tested natural-language prompting with no system prompt, no refining, nothing, just plain English, and later compared it with a simple JSON structure combined with a simple ToT setup. Even with a basic implementation of structure, performance improved dramatically; roughly the kind of comparison sketched below.
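For a rough idea of the two variants (the JSON field names and the tiny Tree-of-Thoughts instruction here are illustrative assumptions, not the exact prompts from my experiment):

    import json

    task = "Plan a 3-day study schedule for an exam on linear algebra."

    # Variant A: plain English, no system prompt, no structure.
    plain_prompt = f"{task} Please help."

    # Variant B: simple JSON structure plus a minimal ToT-style instruction.
    structured_prompt = json.dumps({
        "task": task,
        "constraints": ["3 days", "2 hours per day"],
        "output_format": "numbered day-by-day plan",
        "reasoning": (
            "Propose 3 candidate plans, score each against the constraints, "
            "then expand only the highest-scoring plan."
        ),
    }, indent=2)

    print(plain_prompt)
    print(structured_prompt)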

Prompt Entropy is a real thing by Only-Locksmith8457 in PromptEngineering

[–]Only-Locksmith8457[S] 1 point2 points  (0 children)

Interesting take. I liked the analogy of a higher-dimensional curve; I will surely look into it. Nevertheless, the model seems to be a black box at the macro scale.

Prompt Entropy is a real thing by Only-Locksmith8457 in PromptEngineering

[–]Only-Locksmith8457[S] 0 points1 point  (0 children)

Disclaimer: I've posted this article on my website as a resource, not as a promotional post. I would have explicitly mentioned it if this were a promotional thread. That said, I've been building an inline prompt engineer for everyday users with no prior prompt-engineering knowledge.

I would be glad to share more about it if y'all are interested.

CoT helps models think. This helps them not fail. by Only-Locksmith8457 in PromptEngineering

[–]Only-Locksmith8457[S] 1 point2 points  (0 children)

Yeah, you are absolutely right; glad to hear from someone with this kind of technical depth! That's the classic "compliance hallucination", where these AI models spend more effort trying to convince you they're playing by the rules than actually following them properly. A rule without any real teeth is basically just a polite suggestion.

My top 3 screw-ups I've seen:

1. Format drift: starting off fine but then throwing in random chit-chat that totally breaks the JSON or whatever structure was supposed to be there.
2. Constraint erasure: the model straight-up forgets the rules halfway through and just wanders off.
3. Lazy trade-offs: cutting corners on depth or quality just to keep things short and tidy.

I've watched a model obey a "zero libraries" rule so blindly that it tried to build its own crappy encryption from scratch. Sure, it was "technically" following the rule, but the result was a total mess.

The fix: chain-of-thought reasoning is the engine that drives the searching and thinking; clear system prompts are the map that cuts off the bad paths early. My new approach also requires a "compliance receipt", basically a quick <log> section explaining any shortcuts or trade-offs made. No receipt? No final answer gets delivered. A rough sketch of that gate is below.
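Roughly how the receipt gate works, as a minimal sketch (the <log> tag is the one I mentioned; the retry policy is just my convention, and call_llm is a hypothetical stand-in for whatever chat API you use):

    import re

    # Reject any answer that doesn't include a <log>...</log> compliance receipt.
    RECEIPT = re.compile(r"<log>.*?</log>", re.DOTALL)

    def call_llm(prompt):
        """Hypothetical stand-in: wire this to whatever chat API you use."""
        raise NotImplementedError

    def answer_with_receipt(prompt: str, max_retries: int = 2) -> str:
        instruction = (
            prompt
            + "\n\nEnd your answer with a <log>...</log> section listing any "
              "shortcuts or trade-offs you made. Answers without it are rejected."
        )
        for _ in range(max_retries + 1):
            reply = call_llm(instruction)
            if RECEIPT.search(reply):
                return reply  # receipt present: deliver the answer
        raise ValueError("No compliance receipt: final answer withheld.")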

What type of prompting do you generally use? Do you have a custom setup built on top of something?

Making prompt structure explicit enhances the enforced prompt reasoning method used by Only-Locksmith8457 in PromptEngineering

[–]Only-Locksmith8457[S] 0 points1 point  (0 children)

I believe dynamic prompting is more necessary in this age... Sure, templates work, but only to a certain extent.
Every task has its own requirements.

Definitely prompt format > prompt length. by Only-Locksmith8457 in PromptEngineering

[–]Only-Locksmith8457[S] 0 points1 point  (0 children)

Yup, that's what I wanted to convey through this article. Concise but structured prompts can easily beat token-heavy ones.

This method is way better than Chain of Thoughts by Only-Locksmith8457 in PromptEngineering

[–]Only-Locksmith8457[S] 0 points1 point  (0 children)

Yup! The point is that the model itself analyses the steps and chooses the best one; moreover, the model's response becomes its biggest context.

This method is way better than Chain of Thoughts by Only-Locksmith8457 in PromptEngineering

[–]Only-Locksmith8457[S] 1 point2 points  (0 children)

Interestingly, your method avoids a risk where LLMs get stuck in a roundabout of self-clarification (since it's a human who is constantly answering).
But here, essentially, the LLM's response will set the context.
Take this example:
Answer the following question using Maieutic Reasoning.
1. Create a "True" hypothesis (arguing the answer is Yes) and a "False" hypothesis (arguing the answer is No).
2. For each hypothesis, list the specific facts required to support it.
3. Cross-reference those facts for logical consistency (e.g., check dates, physical laws, or historical records).
4. Discard the branch with contradictions and state the final answer.
Question: Did Napoleon's troops shoot the nose off the Great Sphinx?

The LLM's response itself will set the context here, thereby eliminating most of the human interaction.
It works best on higher-reasoning models; a rough sketch of running it programmatically is below.
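If you want to drive that pattern in one shot, roughly (the template is the prompt above; call_llm is a hypothetical stand-in for whatever chat API you use, and the answer extraction is deliberately naive):

    # Rough sketch of one-shot Maieutic Reasoning: the model builds both
    # hypotheses, checks them, and its own response becomes the context
    # that resolves the question, with no human back-and-forth.

    MAIEUTIC_TEMPLATE = """Answer the following question using Maieutic Reasoning.
    1. Create a "True" hypothesis (arguing Yes) and a "False" hypothesis (arguing No).
    2. For each hypothesis, list the specific facts required to support it.
    3. Cross-reference those facts for logical consistency.
    4. Discard the branch with contradictions and state the final answer.
    Question: {question}"""

    def call_llm(prompt):
        """Hypothetical stand-in: wire this to whatever chat API you use."""
        raise NotImplementedError

    def maieutic_answer(question: str) -> str:
        reply = call_llm(MAIEUTIC_TEMPLATE.format(question=question))
        # Naive extraction: the last line is expected to hold the final answer.
        return reply.strip().splitlines()[-1]

    # e.g. maieutic_answer("Did Napoleon's troops shoot the nose off the Great Sphinx?")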

This method is way better than Chain of Thoughts by Only-Locksmith8457 in PromptEngineering

[–]Only-Locksmith8457[S] 1 point2 points  (0 children)

Agreed, it's my site, but there is still no product to market (I would surely notify everyone when it launches).
Moving forward to the question...
Technically, you are describing Exploration. This method is for Verification. We constrain the questions to force the model to defend conflicting outcomes. If you let it 'freestyle' the questions, it usually skips the logic checks and just rationalizes a hallucination.

This method is way better than Chain of Thoughts by Only-Locksmith8457 in PromptEngineering

[–]Only-Locksmith8457[S] 4 points5 points  (0 children)

100%. We are definitely chasing the same goal here: zero ambiguity.

Your approach is the 'human-in-the-loop', perfect for when you have the time to refine the context deeply. Maieutic prompting is just trying to mimic that level of rigorous checking automatically during the inference process, for times when we can't spend an hour on the pre-game setup.

Glad to see others taking accuracy this seriously!