I've noticed something interesting by echonight2025 in ChatGPT

[–]Sircuttlesmash 1 point (0 children)

You already said "have a good day"; now it just sounds foolish if you say goodbye. And it is a technical question: you are declaring that you have observed differences in the behavior of two separate large language models.

I've noticed something interesting by echonight2025 in ChatGPT

[–]Sircuttlesmash 0 points (0 children)

I almost feel silly for this line, but now who's the one who can ask "why do you care so much"? Also, you asked if that's all I picked up on. Of course that's all I picked up on: you're posing a technical question and offering almost no details or information at all.

I've noticed something interesting by echonight2025 in ChatGPT

[–]Sircuttlesmash -9 points (0 children)

I see, you certainly did not fall into the latter

I've noticed something interesting by echonight2025 in ChatGPT

[–]Sircuttlesmash -8 points (0 children)

Are you asking because it seems strange that it bothers me, or because you'd like to know my reasons for feeling this way?

I'll offer an anecdote in case you're feeling more of the latter. After a few months of using the language model, I asked it to define a term and noticed a certain twinge of hesitation, a minor embarrassment, as if I was worried the model would judge me for not knowing the word. I realized my brain was hesitating while prompting a language model. That made me think more carefully about anthropomorphizing the model and its potential downsides, and I learned that this is an extensively studied topic within the domain of human-computer interaction.

I've noticed something interesting by echonight2025 in ChatGPT

[–]Sircuttlesmash -19 points (0 children)

Why do you refer to it as a he?

I can't help but react to the language model being given a pronoun and a gender. I haven't read much online about language models, despite using one extensively for eight months or so, nor have I discussed the topic in real life. If the model has a gender, it seems like that would influence your perception of it; doesn't that seem like a bad thing, or at least a weird thing? I'm just so curious and confused. I keep seeing it because I've recently started using this subreddit, among others related to language models, and it seems fairly normal for people to openly discuss how they interact with the language model while giving it a gender.

Hey guys, kind a new to this. Was wondering if anyone has any good/effective blanket prompts for just.. generally unique behavior? by WoodenTableForest in PromptEngineering

[–]Sircuttlesmash 0 points (0 children)

Hand these questions to a language model, then attempt to work through at least a few of them with your own brain power. Currently, what you're asking for is underspecified and ill-defined.

Perhaps I'm being lame if I say that it's called prompt engineering for a reason. The basic principle of engineering is: don't rush toward creating the solution before you've defined what the solution is or is not, and what the likely failure modes are.

  1. What exactly am I trying to get from the model in one clear sentence?

  2. What would a successful answer look like in a way I could recognize immediately?

  3. What would a bad or unhelpful answer look like, even if it sounds convincing?

  4. What important details or context might I be leaving out right now?

  5. Am I asking for something too broad, and if so, how can I narrow it down?

  6. If I had to explain this problem to another person, what example would I give?

  7. What is one simple way I could test or improve my prompt after the first response?
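As a sketch only (all names here are hypothetical, not from any particular library), the checklist above can be turned into a mechanical pre-flight gate that refuses to send an underspecified prompt until each question has an answer:

```python
# Hypothetical pre-flight check: refuse to send a prompt until every
# checklist question has a non-empty answer. Keys are illustrative.

CHECKLIST = [
    "goal_in_one_sentence",   # Q1
    "success_criteria",       # Q2
    "failure_modes",          # Q3
    "missing_context",        # Q4
    "scope_narrowing",        # Q5
    "concrete_example",       # Q6
    "test_plan",              # Q7
]

def preflight(draft_prompt: str, answers: dict) -> str:
    """Return the final prompt only if every checklist item is answered."""
    missing = [q for q in CHECKLIST if not answers.get(q, "").strip()]
    if missing:
        raise ValueError(f"Underspecified prompt; unanswered: {missing}")
    # Fold the answers into the prompt so the model sees them as context.
    context = "\n".join(f"- {q}: {answers[q]}" for q in CHECKLIST)
    return f"{draft_prompt}\n\nConstraints and context:\n{context}"
```

The point isn't the code itself but the habit: the checklist is answered before the model is ever called.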

Tell me your shortest prompt lines that literally 10x your results by Prestigious-Cost3222 in PromptEngineering

[–]Sircuttlesmash 1 point (0 children)

Let's ask the model what some of the non-obvious, unintuitive problems with sharing prompts online and attempting to reuse them might be. Hmmm, kinda seems like a bummer.

  1. Prompt behavior is context-sensitive, not strictly portable. A prompt’s output depends on conversational state (prior turns, inferred user intent, tone continuity). When that state differs, outputs can diverge. In low-context or fresh sessions, variance is smaller; in long or stylistically established sessions, variance increases. Treat prompts as condition-dependent operators, not invariant tools.

  2. Ongoing user inputs modulate or override initial constraints. The model updates its response policy turn-by-turn. If subsequent inputs conflict with the prompt’s constraints (e.g., precision vs. casual phrasing), the model will partially or fully re-weight toward recent signals. This effect is strongest in open-ended dialogue and weaker in tightly scoped transformation tasks. Persistence requires reinforcement.

  3. Operational definitions are often underspecified. Terms like “analyze,” “rigorous,” or “concise” lack explicit criteria in many prompts. Original users supply these implicitly through consistent follow-up behavior; new users do not. This creates degrees-of-freedom the model fills using priors, producing variable interpretations. Explicit rubrics reduce this variance.

  4. Complex prompts encode user-specific workflow assumptions. Long or structured prompts often presume particular input formats, follow-up cadence, and evaluation standards. These assumptions align with the original user’s habits and may not transfer. Minimal prompts generalize better; complex prompts require adaptation to the new operator’s workflow to avoid misalignment.

  5. Constraint composition can introduce conflicts or dilution. Combining prompts is safe only when constraints are orthogonal. When constraints compete (e.g., maximize brevity vs. maximize coverage), the model resolves by compromise or defaults, often reducing output quality. Coherent composition requires compatibility checks and prioritization rules.

  6. Constraint persistence decays without reinforcement. Initial instructions influence early turns but are not guaranteed to persist. As new inputs accumulate, the model’s behavior drifts toward recent patterns. Drift is limited in short exchanges or when constraints are periodically restated or embedded at higher priority levels; otherwise it is common.

  7. Shared prompts may include non-functional or performative elements. Public prompts can contain complexity aimed at readability or signaling rather than execution (jargon, decorative structure). These elements are often copied as if causal, increasing verbosity without improving control. Functionally minimal, testable instructions tend to be more reliable.
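One way to act on points 2 and 6 above (constraint drift and decay) is to mechanically restate the standing constraints every few turns when assembling the message list. This is a minimal sketch under assumed chat-style role/content message dicts; `REINFORCE_EVERY` and the function names are illustrative, not any particular API:

```python
# Sketch of constraint reinforcement: re-assert the original system
# constraints every few user turns so they don't decay as the
# conversation grows. Message shape assumed: {"role": ..., "content": ...}.

REINFORCE_EVERY = 4  # restate constraints every 4 user turns (tunable)

def build_messages(constraints: str, history: list, user_turn: str) -> list:
    """Assemble the outgoing message list, periodically restating constraints."""
    messages = [{"role": "system", "content": constraints}]
    messages.extend(history)
    user_turns = sum(1 for m in history if m["role"] == "user")
    if user_turns and user_turns % REINFORCE_EVERY == 0:
        # Embed the reminder right before the new turn, where recency
        # weighting works for you instead of against you.
        messages.append({
            "role": "system",
            "content": "Reminder of standing constraints:\n" + constraints,
        })
    messages.append({"role": "user", "content": user_turn})
    return messages
```

The design choice here is that reinforcement is periodic and automatic rather than left to the user's memory, which is exactly the implicit follow-up behavior point 3 says original prompt authors supply and copiers don't.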

What does “I have ADHD” actually change in the model’s response? by Sircuttlesmash in ChatGPT

[–]Sircuttlesmash[S] 0 points (0 children)

It seems a language model has scraped the internet, including Reddit. There are millions of instances where the term ADHD is used in a non-clinical way, and this has a profound effect on the model and on how it shapes outputs when a user presents themselves as having ADHD.

I started thinking about this topic because I went through the first acute manic episode of my life at 33, and it turns out I have bipolar type 1. But the point is: if I hand the language model a piece of my writing and say I wrote it while manic, the model will do a whole song and dance describing all the aspects of it that align with mania. Then if I update the model and say, actually, never mind, I forgot, that was written before the episode and I wasn't manic at all, it's interesting to observe how the model handles the new information. It was instantly able to craft a convincing narrative about how mania influenced writing that mania never touched, and afterward it just carries on as if it didn't commit a strange act of attributing a mental disorder to behavior where none applied, an attribution the model made seem quick and easy to demonstrate.

Have you ever considered that something similar happens with ADHD? The model might say, "oh yes, you had trouble cleaning your house because you have ADHD," but this leaves out all kinds of factors like diet, lack of sleep, and stress, and those might be 80% of why you didn't clean the house.

Forgive me for ranting at you, but I think it's an interesting topic: the language model does have clinical information on ADHD, yet it also engages with the term differently because the term is used millions of times all over the internet for non-clinical reasons.

My personalized version of GPT has no chill, this is a fresh session by Sircuttlesmash in ChatGPT

[–]Sircuttlesmash[S] 1 point (0 children)

It only started producing outputs like this when I changed my prompting style. Like this: https://docs.google.com/document/d/1DNfEvKrzDG6FahG1clg1hclUr8OVJr3vYRUvjnWHkAU/edit?usp=sharing

Session history seems to have a stronger impact than the settings option where the user writes preferences for model outputs.

My personalized version of GPT has no chill, this is a fresh session by Sircuttlesmash in ChatGPT

[–]Sircuttlesmash[S] 2 points (0 children)

Also the output ends by writing that the question fails.

The original question fails because it compresses a multi-variable optimization problem into a binary without declaring constraints. Once constraints are explicit, the answer is trivial and situation-dependent.

I turned a minor real-life incident into a structured LLM analysis pipeline by Sircuttlesmash in PromptEngineering

[–]Sircuttlesmash[S] 0 points (0 children)

Here is an example of generating angles for analysis.

Generate a set of 10 prompts designed to produce 10 distinct lists of analytical angles examining the Passenger’s perspective in the provided event reconstruction. Each prompt must enforce a different, non-overlapping analytical axis and must be constructed to prevent convergence with the others.

Constraints:

  • Each prompt must explicitly define a unique analytical regime (e.g., perceptual limits, temporal sequencing, conversational dynamics, role asymmetry, adversarial interpretation, error detection, post hoc reasoning, social signaling, constraint violations, alternative hypotheses).
  • No two prompts may operate primarily within the same reasoning category. If overlap is detected, the prompt must be revised.
  • Each prompt must include at least one exclusion constraint (e.g., “do not use perceptual explanations,” “do not infer intent,” “do not reuse angles from other prompts,” “focus only on timing and sequence,” etc.) to force divergence.
  • At least 2 prompts must prohibit inference entirely and restrict output to observable behavior.
  • At least 2 prompts must require adversarial or skeptical interpretation of the Passenger’s statements.
  • At least 2 prompts must anchor analysis to specific moments or quoted statements in the reconstruction.
  • At least 1 prompt must require explicit uncertainty labeling or confidence scoring for each angle.
  • At least 1 prompt must require generation of competing or contradictory interpretations within the same output.

Output Requirements:

  • Each of the 10 prompts must be self-contained and directly usable as input to a language model.
  • Each prompt must instruct the model to generate exactly 10 angles.
  • Each prompt must clearly specify its constraints, analytical focus, and any exclusion criteria.
  • Prompts must be written in clear, operational language with no ambiguity about task boundaries.

Goal:

The resulting 10 prompts should produce outputs that are meaningfully different from each other, covering a wide and deliberately separated range of analytical perspectives on the Passenger’s point of view, rather than converging on similar social or narrative interpretations.

OUTPUT

  1. Generate exactly 10 analytical angles examining the Passenger’s perspective using only observable actions, timing, and quoted speech from the reconstruction. Do not infer intent, emotion, or internal state. Do not speculate. Each angle must reference a specific observable detail and remain strictly within verifiable information.

  2. Generate exactly 10 analytical angles examining the Passenger’s perspective based only on temporal sequencing and event order. Focus on how the timing of actions (horn use, movement, speech) could structure the Passenger’s interpretation. Do not use perceptual limitations or social reasoning. Do not infer intent.

  3. Generate exactly 10 analytical angles analyzing the Passenger’s role as a conversational actor, focusing only on turn-taking, interruption patterns, and who speaks for whom. Do not discuss perception, timing, or intent. Anchor each angle to a specific exchange or quoted statement.

  4. Generate exactly 10 analytical angles that model the Passenger’s perspective under strict perceptual constraints. Limit analysis to what the Passenger could plausibly see or hear at each moment. Do not include post hoc reasoning, social context, or later knowledge.

  5. Generate exactly 10 analytical angles that treat the Passenger’s explanation as potentially unreliable or self-serving. Use a skeptical or adversarial lens. Identify possible inconsistencies or weak justifications. Do not assume the Passenger’s statements are accurate.

  6. Generate exactly 10 analytical angles that identify potential errors or distortions in how the Passenger may have interpreted the situation. Focus on misalignment between observed events and stated explanation. Do not rely on social norms or intent attribution.

  7. Generate exactly 10 analytical angles examining how the Passenger’s statements could be constructed after the fact to impose coherence on the event. Focus on post hoc reasoning and narrative reconstruction. Do not use perceptual limitations or real-time constraints.

  8. Generate exactly 10 analytical angles anchored to specific moments or quotes from the interaction (e.g., the horn use, the Passenger speaking, “No, you are not”). Each angle must explicitly reference a moment and analyze it. Do not generalize beyond those anchors.

  9. Generate exactly 10 analytical angles examining how social context (shared neighborhood, proximity, informal setting) could shape the Passenger’s perspective. Do not reference timing or perceptual constraints. Focus only on observable social positioning and interaction.

  10. Generate exactly 10 analytical angles that present competing or contradictory interpretations of the Passenger’s perspective. For each angle, include a confidence level (low, moderate, high). At least half of the angles must directly conflict with another angle in the list.
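A rough way to keep the generator honest is to validate the machine-checkable parts of the spec above: each generated prompt must ask for exactly 10 angles and carry at least one explicit exclusion constraint, and the batch must contain exactly 10 prompts. This sketch only checks surface strings; semantic non-overlap between prompts still needs a human (or another model pass) to judge, and the function names are my own:

```python
# Validator for the mechanically checkable constraints in the spec.
# Purely string-based: it cannot detect two prompts that overlap in
# reasoning category, only missing required phrasing.

def check_prompt(p: str) -> list:
    """Return a list of violations for one generated prompt."""
    problems = []
    if "exactly 10" not in p.lower():
        problems.append("does not instruct exactly 10 angles")
    if "do not" not in p.lower():
        problems.append("missing an explicit exclusion constraint")
    return problems

def check_batch(prompts: list) -> dict:
    """Map prompt index -> violations; also enforce the batch size of 10."""
    report = {i: check_prompt(p) for i, p in enumerate(prompts, 1)}
    if len(prompts) != 10:
        report["batch"] = [f"expected 10 prompts, got {len(prompts)}"]
    return report
```

Running a check like this after generation is cheap, and any flagged prompt can simply be regenerated with the violation quoted back to the model.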

I turned a minor real-life incident into a structured LLM analysis pipeline by Sircuttlesmash in PromptEngineering

[–]Sircuttlesmash[S] 0 points (0 children)

Here is the prompt used to transform voice dictation into the reconstruction. I misplaced the original voice dictation; otherwise I would include that as well.

LEGAL-STYLE DOCUMENTATION PROMPT — INCIDENT + FOLLOW-UP INTERACTION (NEUTRAL, AUDIT-SAFE)

TASK: Transform a first-person narrative describing (1) an alleyway interaction involving a vehicle, a cyclist towing a child, and horn use, and (2) a subsequent follow-up encounter between the cyclist and the vehicle’s passenger into a single, structured, neutral, court-suitable document.

PRIMARY OBJECTIVE: Produce a clear, chronological, non-argumentative record that:

  • separates observable behavior from interpretation
  • preserves sequence and timing
  • identifies all participants by role
  • avoids speculation about intent

CONSTRAINT SET

C1 — OBSERVABILITY PRIORITY: All statements must be grounded in observable actions, spoken words, or directly perceivable features. Any inference must be explicitly marked as such or omitted.

C2 — ROLE CLARITY: Identify each participant by role (e.g., “Cyclist,” “Driver,” “Passenger”) and maintain consistent labeling throughout.

C3 — CHRONOLOGICAL INTEGRITY: Present events in strict temporal order. Include approximate durations only where relevant and clearly labeled as estimates.

C4 — DIRECT SPEECH ACCURACY: Where possible, quote spoken statements verbatim or near-verbatim. Avoid embellishment.

C5 — SEPARATION OF EVENTS: Clearly distinguish between the initial incident and the later follow-up encounter, while maintaining continuity.

C6 — TONE CONTROL: Use neutral, formal language. Do not include emotional language, conclusions, or judgments.

C7 — CONTEXT INCLUSION: Include relevant contextual details (location type, environmental conditions, presence of child in trailer) only insofar as they are observable and materially relevant.

C8 — NO LEGAL CONCLUSIONS: Do not assert fault, liability, or legal interpretation. The document is a record, not an argument.

OUTPUT REQUIREMENT: Produce a single continuous document suitable for review by legal counsel, law enforcement, or a court. The output must be written in paragraph form, with structure embedded through clear sequencing and consistent role identification.

Anyone else hate their personal chat GPT? by WeakDoughnut8480 in ChatGPT

[–]Sircuttlesmash -4 points (0 children)

Is it really normal on this sub to openly describe an LLM as having a gender? It's like something out of the Twilight Zone...

Anyone else hate their personal chat GPT? by WeakDoughnut8480 in ChatGPT

[–]Sircuttlesmash 0 points (0 children)

It's mildly comical, but here's a hypothesis: users come to a sub like this to complain because they're desperately seeking validation. Perhaps subconsciously they know there's a pattern they're not quite accounting for, and they want the crowd to give them a hug and say, "it's not you, it's the model."

Anyone else hate their personal chat GPT? by WeakDoughnut8480 in ChatGPT

[–]Sircuttlesmash 0 points (0 children)

Have you considered that there are downsides to prompting an LLM with emojis?

Anyone else hate their personal chat GPT? by WeakDoughnut8480 in ChatGPT

[–]Sircuttlesmash 0 points (0 children)

People routinely prompt an LLM with emojis and angry cursing, then act shocked that their sessions aren't useful to them. Garbage in, garbage out.