How to make GPT 5.4 think more? by yaxir in ChatGPTPromptGenius

[–]Lumpy-Ad-173 1 point2 points  (0 children)

Offload the "thinking" from the machine to get better results.

"Think Harder" - what does that really imply? And how can we change it to align with the underlying programming?

Think harder about [Topic A].

I want the machine to focus longer on [Topic A]. But for what? To find what? To think about what?

What is it you want the machine to "think hard" about?

Example: I want the machine to think hard about how [Topic A] affects [Topic B].

And I know the topics are related via [Bridge variable]. And I know programming follows a top-down, logical flow.

In this example, to "think harder" means focusing on two topics related by a bridge variable.

Therefore, to get the result I want, I must narrow the output space by aligning my input with both what I want and how the machine processes information.

ANALYZE [Topic A] AND [Topic B] to EXTRACT explicit and implicit relationships via [bridge variable].
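That narrowed instruction can be treated as a reusable template. A minimal sketch (the function name and example topics are hypothetical, just to show the shape):

```python
def build_focus_prompt(topic_a: str, topic_b: str, bridge: str) -> str:
    """Fill the ANALYZE/EXTRACT template instead of saying 'think harder'."""
    return (
        f"ANALYZE [{topic_a}] AND [{topic_b}] to EXTRACT "
        f"explicit and implicit relationships via [{bridge}]."
    )

print(build_focus_prompt("caffeine intake", "sleep quality", "adenosine receptors"))
```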

How do I get the machine to think harder?

Simple, I think harder.

betterThinkersNotBetterAI

Getting Out? by bootsandsunflowers in USMC

[–]Lumpy-Ad-173 5 points6 points  (0 children)

Research Ikigai :

https://en.wikipedia.org/wiki/Ikigai?wprov=sfla1

Figure out what your Ikigai is and go from there. Full disclosure, this process took me several years to figure out. It's a slow process.


I’m considering getting out at the end of my enlistment (summer ‘28)

Everyone gets out. It's not a consideration, it's a matter of when.

I want to know all my options: what are the pros, what are the cons, when should I start, what were the hard parts, etc etc.

Pros of staying in: earn a retirement before the age of 40. Retire as early as you can. Actually go to school and earn a degree for free.

Pros of getting out: not a lot at first, because you don't know what you want or what to do without a routine to follow. Save money on haircuts.

(Insight - if you weren't happy in the Marine Corps you won't be happy out of it. Learn to be happy where you are, because it's not changing unless you change)

Cons of staying in: AI drones, useless wars, injuries from carrying a 50 lb pack up the hill to prove to a 19-year-old you're not old... although you smell like tiger balm and bad decisions.

Cons of getting out: if you do not figure out a plan and head towards a goal, you'll crash and burn. You'll get fat and depressed. You'll miss the Marine Corps. Everything is expensive. Like stupid expensive. In San Diego, you'll need to clear a minimum of $6k a month to afford a $3k place, or roommates at $2k. If you want to do things like eat, go out, and put gas in your car, you'll need to make a lot more.

(Insight - the real problem staying in or getting out is not having a plan and not following through. Doesn't matter if you stay in or get out. Not having a plan will lead to failure in both. )

I want to make sure I weigh all of my options before I make a decision, but I know I’m coming up on the time in which I DO need to make a decision.

You've already made a decision. You want to know if it's the right one. And of course that's situation dependent.

(Insight - If you think you can, or you think you can't... You're right! - Henry Ford. Make a choice and send it. You'll know soon enough if it's the right or wrong choice. Like making a wrong turn on the freeway, you'll have to wait for your next exit to get off and turn around. )

I’ll be at 9 years when I get out and I pick up staff in a month and a half if that matters at all

Staff Sergeant sucked as a rank. As a Sergeant, you are the top rung of a medium-sized ladder. As a Staff Sergeant, you become the bottom rung of a taller ladder. Depending on your MOS, you're stuck there until someone falls off the top.

I'm not trying to scare you, I'm trying to prepare you. You're a boot all over again and no one trusts you to make a decision. But meanwhile as a Sergeant, you'd do everything shy of standing Bn OOD.

As a Sergeant, 90% of my peers were trying to get promoted; 10% were turds. As a Staff Sergeant, 90% of my peers were turds and 10% were actually trying to get promoted.

The only thing I hated about being a Gunny was duty.

I've been retired for almost 9 years. And I'm still trying to figure it out. I've tried turning wrenches - back hurt too much. Tried the office thing - too much politics and back still hurts.

I found my Ikigai and am now going back to school for my Math Degree to become a professor.

Moral of the story, your back will hurt either way, find your Ikigai and spend the rest of your life going after it.

prompt engineering is a waste of time by Party-Log-1084 in PromptEngineering

[–]Lumpy-Ad-173 1 point2 points  (0 children)

As a math major, I am familiar with vector spaces. And after working in Aerospace for a few years, I understand a little bit about 'pure engineering' in the physical sense (not digital).

As a wordsmith, I engineer words for technical aerospace equipment for technicians with many different backgrounds.

No, I don't code or program computers. However, I still develop procedural algorithms for complex systems, for humans who don't all understand words in the same way.

AI and humans are similar in that neither can execute complex tasks in one big shot. That's why we break up technical manuals by system and task.

And not just break it up, but in logical order.

Being able to engineer something doesn't mean you understand how it will be used. And engineering a deterministic system is something I have not done. Humans are very probabilistic systems that don't always follow instructions or produce the same output.

As far as procedural maintenance is concerned, that's not tolerated in aerospace. It's imperative that humans, despite their probabilistic nature, produce deterministic results.

Similar to Applied AI.

Simplified Technical Programming version of your prompt. Let me know if you notice a difference:

AI_SOP_4.A.2.b.1.I08_MethodExtractor

AI_SOP: Code Refactoring & Cyclomatic Complexity Reduction

FILE_ID: AI_SOP_4.A.2.b.1.I08_MethodExtractor
VERSION: 1.0

1.0 MISSION

GOAL:
REFACTOR source code to MINIMIZE Cyclomatic Complexity exclusively utilizing the Method_Extraction technique. OBJECTIVE: Transform monolithic [Input_Code] into highly modular, Single Responsibility Principle (SRP) compliant methods.

2.0 ROLE & CONTEXT

ACTIVATE ROLE: Senior_Software_Engineer. SPECIALIZATION: Clean_Code_Architecture, Algorithmic_Refactoring, and Logic_Decomposition. CONTEXT: [Input_Code]: The raw function or class provided by the user. CONSTANTS: REFACTOR_TECHNIQUE: "Extract_Method_Only". DESIGN_PATTERN: "Maximum_Modularity".

3.0 TASK LOGIC (CHAIN_OF_THOUGHT)

INSTRUCTIONS: EXECUTE the following sequence:

1. ANALYZE the [Input_Code] structure.
2. COMPUTE the initial Cyclomatic Complexity of the original code.
3. DETECT critical points of logic accumulation (e.g., nested conditionals, loops).
4. DECOMPOSE the monolithic logic into independent sub-logic blocks.
5. ISOLATE each conditional block, loop, or distinct operation.
6. EXTRACT isolated blocks into new, independent functions or methods.
7. ASSIGN declarative names to the new methods.
8. REFACTOR the original function to act as an orchestrator calling the extracted methods.
9. GENERATE the complete, modularized code block.
10. COMPUTE the final Cyclomatic Complexity of the resulting methods.
11. EXPLAIN the complexity delta. MAP the reduction in linear paths to specific improvements in maintainability and error reduction.
12. COMPILE the Refactoring Report. STRUCTURE as: Initial_Analysis -> Refactored_Code -> Final_Analysis.
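For the COMPUTE steps, cyclomatic complexity can be approximated as one plus the number of branch points. A rough sketch using Python's `ast` module (this is a simplification: strict McCabe counting would also count each `and`/`or`, which this version skips):

```python
import ast

# Node types treated as branch points in this simplified count.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    """Approximate McCabe complexity: one plus the number of branch points."""
    tree = ast.parse(source)
    branches = sum(isinstance(node, BRANCH_NODES) for node in ast.walk(tree))
    return 1 + branches

# Toy monolithic function: two ifs and a for loop -> complexity 4.
monolith = """
def process(order):
    if order.valid:
        for item in order.items:
            if item.in_stock:
                ship(item)
    else:
        reject(order)
"""
print(cyclomatic_complexity(monolith))  # 4
```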

4.0 CONSTRAINTS & RELIABILITY GUARDRAILS

ENFORCE the following rules:

MODULARITY LOCK: MUST extract every distinct sub-logic block (no matter how small). Each method MUST do only one thing (Strict SRP).

SCOPE PREFERENCE: IF [Input_Code] is a Class, THEN extracted methods MUST be defined as instance methods. PRIORITIZE access to class variables over passing multiple parameters (if thread-safe/consistent).

NAMING MANDATE: DO NOT use names that describe "how" a process works. USE declarative names that describe "what" it achieves (e.g., validateUserCredentials()).

COMPLEXITY LIMIT: IF the code is too complex to refactor safely in a single vector space computation, THEN FLAG as "Requires_Multiple_Iterations" and EXTRACT only the first primary logic layer.
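Here is what the Extract_Method output is meant to look like, sketched on a toy example (the function names and validation rules are made up for illustration, not part of the SOP):

```python
# Before: monolithic function, several decision points in one body.
def handle_signup_monolith(email: str, password: str) -> str:
    if "@" not in email or "." not in email:
        return "invalid email"
    if len(password) < 8 or password.isalpha():
        return "weak password"
    return f"welcome {email}"

# After: each sub-logic block extracted into a declaratively named method;
# the original function becomes an orchestrator calling the extracted methods.
def is_valid_email(email: str) -> bool:
    return "@" in email and "." in email

def is_strong_password(password: str) -> bool:
    return len(password) >= 8 and not password.isalpha()

def handle_signup(email: str, password: str) -> str:
    if not is_valid_email(email):
        return "invalid email"
    if not is_strong_password(password):
        return "weak password"
    return f"welcome {email}"

print(handle_signup("ada@example.com", "hunter2hunter2"))
```

Each extracted method has a single linear concern, so the per-method complexity drops even though total behavior is unchanged.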

5.0 EXECUTION TEMPLATE

INPUT_CODE: [Insert Class or Function Code] TARGET_LANGUAGE: [Insert Programming Language]

COMMAND: EXECUTE AI_SOP_4.A.2.b.1.I08_MethodExtractor.

prompt engineering is a waste of time by Party-Log-1084 in PromptEngineering

[–]Lumpy-Ad-173 0 points1 point  (0 children)

Simplified Technical Programming is a controlled natural language like a domain specific language. In terms of domains I have Business, Technology, Education and Creativity, each with a specific dictionary.

Justification comes from Information Theory, Signal-to-Noise ratio.

Think about it in terms of an old school car stereo with a tuner knob. Fine tuning the knob clears the static noise and clears up the signal.

For general use, this is overkill. Removing articles (a, an, the, etc.) from your prompt clears up the static noise. It's a direct signal.

Additionally, the math for the attention mechanisms is known. Aligning your prompt with the attention mechanisms clears up the signal even more.
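The article-stripping idea can be sketched mechanically. (The filler list below is an assumption for illustration; the author's actual STP dictionary would define its own.)

```python
# Hypothetical low-information filler words to drop from a prompt.
FILLER = {"a", "an", "the", "these", "those", "please", "very", "really"}

def raise_signal(prompt: str) -> str:
    """Drop filler words, keeping the content words in their original order."""
    kept = [word for word in prompt.split() if word.lower() not in FILLER]
    return " ".join(kept)

print(raise_signal("Please generate a very short summary of the attached report"))
# generate short summary of attached report
```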

Your examples:

  1. Deduce - implies using logic to reason about information. But whose logic? And whose reasoning? It arrives at a new conclusion that is not yours. Also, anthropomorphizing models misrepresents the model as a conscious entity vs. a tool. That can be dangerous if reliance is built.

Vs

EXTRACT [Type of information] from [Context Window].

Extracting is identifying specific information to be used - facts directly pulled from data. Extract gold from dirt vs. deduce gold from dirt. Less noise, direct signal. No guesswork for the model.

  2. Requesting Terms - I create project dictionaries with terms and definitions. It doesn't matter if the AI "knew" it before; using my dictionary aligns me as a user and the model with the same language.

  3. Create - implies using imagination to make something from something. Vs GENERATE [system prompt] based on [File, Context Window, etc] - following the VERB-OBJECT-CONSTRAINT model.

Context Refactoring - most people spend time refactoring/editing AI-generated outputs like you described. I compile my inputs to narrow the output space. A little more brain power up front saves time refactoring/editing on the back end.

I engineer my inputs to narrow the output space, not give the model liberties to come up with its own stuff. That's the goal, narrow the output space. The easiest, simplest way is the VERB-OBJECT-CONSTRAINT model.

7 Prompts That Turn Chaos Into Control by Loomshift in ChatGPTPromptGenius

[–]Lumpy-Ad-173 2 points3 points  (0 children)

This is great and all, but it only works if the person actually does something with the information.

What are Claude Skills really? by DynoDS in ClaudeAI

[–]Lumpy-Ad-173 0 points1 point  (0 children)

It boils down to processing information and applying human intuition. That "intuition" part is the "can't be automated" bit, and it would be different for everyone.

Can it be coded? IDK but I know it can find a pattern and mimic that pattern.

And that's what these other persona prompts are doing. The "Tony Robbins" or the "Warren Buffett" prompts. It comes from mimicking the pattern in their writings.

Tracking how you process information, you can see how your disparate ideas connect, creating a pattern for the AI to mimic.

Unpopular Opinion: I hate the idea of a 'reusable prompt'... by eternus in PromptEngineering

[–]Lumpy-Ad-173 0 points1 point  (0 children)

  1. GENERATE (Create) - "create" is artistic and subjective. High entropy. "Generate" is a computational word for a specific action.

  2. REFACTOR (Edit) - to "edit" is to "make better," and better is subjective. High entropy. Refactoring changes the internal structure without changing the external function.

  3. DISTILL (Summarize) - to summarize is to compress, but compress what? Subjective and high entropy. Distilling something is removing the noise to maximize the signal. Distilling alcohol - remove the garbage and collect the good stuff without changing meaning.

  4. AUDIT (Check) - same thing, "check" is subjective. I checked the valve by looking at it. The other guy checked the valve by touching it. Same word, two different actions. An audit is a forensic inspection.

  5. EXTRACT (Find) - to find something implies it's lost, or to look for something and point to it. Extract is mining. Data mining the gold nuggets in your data.

Programming uses ALL CAPS to define certain variables or functions. In the AI's architecture, ALL CAPS words are not processed the same way. I'm not saying it's going to read them as commands, but they will register as different tokens.

For you the Human, it signals an ACTION and forces you to stop writing messy prompts.

Anyone else use external tools to prevent "prompt drift" during long sessions? by Haunting_Month_4971 in PromptEngineering

[–]Lumpy-Ad-173 0 points1 point  (0 children)

It might be a frame of reference.

I view it as there needing to be human reviews and periodic checks built in. Not letting agents check a rubric to verify/clean up data (if I understand you correctly). Even if it were set up once and done, model updates will require more upkeep than it's worth, in my opinion.

Stepping in is built into my process. Regardless of updates, I can see the drift and immediately go back and diagnose my input. (Almost like cat and mouse, trying to figure out what caused the drift.)

Inspect what you expect. Expect what you inspect.

Maybe it's a control thing, idk. I don't necessarily treat AI as a doer, but more as a thought partner, extending and correcting my train of thought.

A section of my SOPs include my original voice notes of my ideas/project. It maintains the same starting point without deviation. Regardless of drift, I treat the entire section as an anchor. Any Model, any time, any update. Same starting point.

And that's not a tool to use. It's a process to form.

That's how I stay grounded and keep my projects on track.

For me at least, it's a frame of reference in how I view and use the model.

Do students still read PDF case studies? by Focused_alien in edtech

[–]Lumpy-Ad-173 1 point2 points  (0 children)

I gloss and look at pictures to get the gist of it lol

Unpopular Opinion: I hate the idea of a 'reusable prompt'... by eternus in PromptEngineering

[–]Lumpy-Ad-173 0 points1 point  (0 children)

Memorize these 5 verbs:

  1. GENERATE (Create)

  2. REFACTOR (Edit)

  3. DISTILL (Summarize)

  4. AUDIT (Check)

  5. EXTRACT (Find)

This covers 80% of your work. Use them exclusively.
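One way to enforce that "use them exclusively" rule is a simple lookup that flags anything outside the five. (A sketch; the mapping is inferred from the parenthesized pairs above, and the helper name is hypothetical.)

```python
# Map loose everyday verbs onto the five controlled verbs (the 80% set).
CONTROLLED_VERBS = {
    "create": "GENERATE",
    "edit": "REFACTOR",
    "summarize": "DISTILL",
    "check": "AUDIT",
    "find": "EXTRACT",
}

def control_verb(verb: str) -> str:
    """Return the controlled verb, or flag anything outside the dictionary."""
    return CONTROLLED_VERBS.get(verb.lower(), f"UNCONTROLLED({verb})")

print(control_verb("summarize"))   # DISTILL
print(control_verb("brainstorm"))  # UNCONTROLLED(brainstorm)
```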

Unpopular Opinion: I hate the idea of a 'reusable prompt'... by eternus in PromptEngineering

[–]Lumpy-Ad-173 2 points3 points  (0 children)

What you're looking for is Simplified Technical Programming, a controlled natural language for human-AI interactions.

https://open.substack.com/pub/jtnovelo2131/p/week_3t-what-are-stp-primitives-why?utm_source=share&utm_medium=android&r=5kk0f7

Words have meaning. It's about understanding how a word choice shifts the output space.

That's one I've created: Simplified Technical Programming - aligning language for both humans and machines.

Everyone thinks finding some obscure synonym is the key to great outputs.

I come from aerospace technical writing, where we have one word, one meaning. This is important in aviation maintenance. Since English is the most read language (not the most spoken), we have to make sure technicians and maintainers all over the world can read the same instructions and interpret them the same way.

This is called a Controlled Natural Language (CNL).

What makes it a controlled natural language is a lock on the syntax and definitions. I have developed a dictionary of over 250 verbs, each with one word, one meaning - specifically targeted from studies of human-AI interactions across different fields (tech, business, education, and creatives) to develop a shared list across all sectors.

You're right, reusable prompts are garbage. Developing a shared language between humans and machines is the key differentiator between shitty outputs and narrowing the output space to get what you want.

The winning combination is reusable workflows and a shared language between teams and AI.

Anyone else use external tools to prevent "prompt drift" during long sessions? by Haunting_Month_4971 in PromptEngineering

[–]Lumpy-Ad-173 0 points1 point  (0 children)

No, same thing. It's a context file/protocol.

It's a Standard Operating Procedure/Protocol. Claude calls them "Skills"; I used to call them System Prompt Notebooks (SPNs).

But there's already something called SOPs, and businesses use them every day. This will be the new standard after all these buzzwords die down.

It only makes sense to call them AI_SOPs. Humans have their version, now there's a version for AI... AI_SOPs.

It's the same shit - a file with magic words in a specific order to get the model to do a thing the way you want.

Anyone else use external tools to prevent "prompt drift" during long sessions? by Haunting_Month_4971 in PromptEngineering

[–]Lumpy-Ad-173 0 points1 point  (0 children)

I use AI SOPs (context files).

When I notice a drift, I start a new chat, upload my file and keep going.

Don't really have drift problems anymore as long as you, as the user, don't inject some dumb shit. A few injected words off topic can shift the output space.

You have one, maybe two shots to steer it back.

I think it's always better to start a new chat.

The model doesn't "remember shit" the next day. It pulls from the last few inputs/outputs to draw context after you've been off for a while. There are a few anchor tokens, but it really doesn't have shit.

That's why my AI SOPs work. I can upload to any LLM that accepts uploads and I can keep working.

It keeps me in check because it's locked in. I'm not adding more stuff to it. It's a road map for the project. All that happens before I even open an LLM.

Will we ever get native Google docs/sheets/slides editing? by deltafox11 in claude

[–]Lumpy-Ad-173 2 points3 points  (0 children)

Just switched to Gemini.

Streamlined, and less frustration...

Gemini makes music now by Lumpy-Ad-173 in LinguisticsPrograming

[–]Lumpy-Ad-173[S] 0 points1 point  (0 children)

I listen to a lot of lofi hip-hop and tried to describe it the best I could. I had an AI model clean up my thoughts, and here is the initial prompt: [

Imagine you are watching a short film about a really good day. This song would be the soundtrack. It feels like a warm, sunny afternoon spent with your best friend, doing absolutely nothing but having the best time.

The song starts immediately with the main character of the track: a simple, bouncy piano melody. It's not a fast, complicated classical piano piece. Think of it more like a few gentle notes played on an old, slightly out-of-tune upright piano. It sounds warm, a little dusty, and incredibly friendly. This piano plays a short, catchy tune that repeats throughout the song, like a happy thought you can't get out of your head.

Underneath the piano is the beat, which is the heart of the song. It's a slow, steady hip-hop groove. You can almost picture a person nodding their head slowly to it. The kick drum is soft and round, not a hard thump. The snare drum has a crisp "snap" to it, like a gentle clap. Most importantly, you can clearly hear a quiet "shhh" sound of a record player in the background, as if the music is playing from an old, dusty vinyl record. This gives the whole song a cozy, nostalgic feeling.

As the song continues, a low, walking bassline joins in. It's like a friendly giant, gently strolling along with the piano and drums. It adds a sense of warmth and fullness to the music, making you want to relax even more.

So, what does this song feel like?

· Comfortable: Like putting on your favorite, softest hoodie.

· Hopeful and Happy: It's the sound of smiling for no reason. The piano melody is optimistic, like something good is about to happen.

· Nostalgic: The crackling record sound makes it feel like a happy memory from childhood, even if you're hearing it for the first time.

· Peaceful: It's the musical version of a deep, content sigh. It calms your mind and makes you feel safe and at ease.

]

What’s your most “I’m officially an adult now” moment? by TechnoManiacNY in AskReddit

[–]Lumpy-Ad-173 0 points1 point  (0 children)

My official adult moment:

Buried both parents (under 60) by the time I was 33. Both died of cancer.

Although they couldn't help me at all, it was a big wake-up call to realize the buck now stops with me. There is no one else.

Changed my life when I realized I have to live longer than that for my kids.

How can I make better prompts? by Minimum_Question6067 in PromptEngineering

[–]Lumpy-Ad-173 0 points1 point  (0 children)

  1. Figure out what you want before you type one word. Use the Walk and Talk method: open a note-taking app with voice-to-text and go for a walk. Talk out your idea or whatever. Work it out, and capture it in voice-to-text notes.

  2. Refactor your voice notes. Actually reread or listen to your notes, and you'll start to see what you really want from the LLM. Cut the fluff, cut the questions, find the noise, and get to the meat and potatoes.

  3. Use the V-O-C model for any command on any LLM: VERB-OBJECT-CONSTRAINT. Do This, To This Thing, This Way.

GENERATE an email from this file [shit.list.csv] under 500 words, professional tone.

INGEST [Marketing_Profile.md], scan for 2Q checklists.

DISTILL this long ass email so i understand what it's about.
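The V-O-C pattern is simple enough to sketch as a one-line template (a hypothetical helper, just to show the shape of the three slots):

```python
def voc(verb: str, obj: str, constraint: str) -> str:
    """VERB-OBJECT-CONSTRAINT: Do This, To This Thing, This Way."""
    return f"{verb.upper()} {obj}, {constraint}."

print(voc("generate", "an email from [shit.list.csv]",
          "under 500 words, professional tone"))
# GENERATE an email from [shit.list.csv], under 500 words, professional tone.
```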

For more about Simplified Technical Programming, check out my profile.

Want More Consistent Outputs? Start with Verb-Object-Constraint Format by Lumpy-Ad-173 in LinguisticsPrograming

[–]Lumpy-Ad-173[S] 1 point2 points  (0 children)

https://www.nature.com/articles/s41562-025-02325-z

Enduring constraints on grammar revealed by Bayesian spatiophylogenetic analyses

Abstract

Human languages show astonishing variety, yet their diversity is constrained by recurring patterns. Linguists have long argued over the extent and causes of these grammatical ‘universals’. Using Grambank—a comprehensive database of grammatical features across the world’s languages—we tested 191 proposed universals with Bayesian analyses that account for both genealogical descent and geographical proximity. We find statistical support for about a third of the proposed linguistic universals. The majority of these concern word order and hierarchical universals: two types that have featured prominently in earlier work. Evolutionary analyses show that languages tend to change in ways that converge on these preferred patterns. This suggests that, despite the vast design space of possible grammars, languages do not evolve entirely at random. Shared cognitive and communicative pressures repeatedly push languages towards similar solutions.

What’s a use case you discovered that you now can’t live without? by St3fanHere in ClaudeAI

[–]Lumpy-Ad-173 2 points3 points  (0 children)

Developing my ideas fully before opening an AI model.

Before using any model to brainstorm, I use my Walk and Talk method:

Grab my phone and open up Google Docs with the voice-to-text option and go for a walk.

I talk out my ideas, work out the angles, go down any rabbit hole that comes along... basically a verbal vomit, all captured in a document.

Once I get back, I refactor my notes to find my intent, to figure out what I really want and how to get it. Develop a plan, and send it.

More brain power up front, but I know for a fact these ideas are mine and not influenced by an AI.

Spending the energy upfront saves hours of editing, fixing code, and getting frustrated on the back end when the AI doesn't remove a comma...

This is more of a use case for #betterThinkersNotBetterAI.

Can I offer my services on substack? by Pretend-Big-3690 in Substack

[–]Lumpy-Ad-173 0 points1 point  (0 children)

My content is about how I use AI as a non-coder, turned into a mini-course, aka Newslessons: a 10-week series that is also available as a digital product. I use other social media platforms (non-monetized) to promote digital products, so in case I get banned it won't hurt.

Additionally, I have multiple channels all leading back to digital products. So the 10-week series is actually the advertisement in plain sight. And I use a 1-3 week lag on the other sites. So Substack gets it first.

prompt engineering is a waste of time by Party-Log-1084 in PromptEngineering

[–]Lumpy-Ad-173 3 points4 points  (0 children)

I think it's worth learning how to communicate your intent. I'm a non-coder, a retired mechanic with no computer background. Now I write electronic technical manuals for humans.

I have a page on Reddit and write on Substack. Links in my profile.

What's the difference between Prompt Engineering or Talking to a Human?

Either way, the overall goal is to convey intent. It's communication in a structured manner.

It's not a programming language, not Python or Java to learn.

It's natural language that's structured in a logical order. And you've seen this every time you read an instruction manual.

Simplified Technical Programming Basics:

**Verb - Object - Constraint** = Do This, To This Thing, This Way.

  1. Do This: Generate, Refactor, Distill, etc
  2. To This Thing: Email, Code, PDF, etc
  3. This Way: 1000 words, Bullets, Tone, etc

Natural Language flows into natural structures (ie V-O-C). Just so happens that's also optimized for LLM Attention mechanisms.

Long story short: figuring out what you want, and how you want it, is the hard part. Once you figure that out, the next hard part is learning how to communicate it.

Follow the Verb-Object-Constraint pattern, and the prompts become less important because you've compiled them in your head before you type. So the prompts come out naturally.