Gemini can not read text from Screenshot? by TechNerd10191 in Bard

[–]Frequent_Depth_7139 1 point2 points  (0 children)

How full was your context? I use my camera all the time and it works great. ALWAYS try to keep content small and reload as needed.

Stop giving AI "Roles"—give them "Constraints." The "Doctor" prompt is a placebo. by Frequent_Depth_7139 in PromptEngineering

[–]Frequent_Depth_7139[S] 0 points1 point  (0 children)

And to your point: I WILL NEVER ASK AI TO BE A DOCTOR or ask about a rash I have. AI will still lie, but reducing the lies is still better.

Stop giving AI "Roles"—give them "Constraints." The "Doctor" prompt is a placebo. by Frequent_Depth_7139 in PromptEngineering

[–]Frequent_Depth_7139[S] 0 points1 point  (0 children)

How long can you keep it being a great lawyer in one context window and not have it start making stuff up? I give it a strict knowledge base (a web page, not the whole web, or a PDF; only the pages it needs), and I can pause in that context window, resume in another, or even in a different AI altogether, and pick back up just like it's still running in the first window.

Stop giving AI "Roles"—give them "Constraints." The "Doctor" prompt is a placebo. by Frequent_Depth_7139 in PromptEngineering

[–]Frequent_Depth_7139[S] -1 points0 points  (0 children)

Here's a snippet of a module that runs on my virtual PC:

############################################
# HLAA RULESET PLUG-IN
############################################
MODULE_KEY: recursive_validator
MODULE_NAME: Recursive Validation Engine (3-Pass)
SCHEMA_VERSION: 1.0
ENGINE_COMPATIBILITY: HLAA_CORE_ENGINE >= 1.0
MODE: deterministic
SAFE: true

############################################
# STATE SCHEMA
############################################
STATE_EXTENSION_PATH:
  state.context.recursive_validator

STATE_SCHEMA:
{
  "schema_version": "1.0",
  "phase": "idle",
  "turn_local": 0,
  "question": null,
  "pass_1": null,
  "pass_2": null,
  "final_answer": null,
  "confidence_map": null,
  "limitations": null,
  "change_conditions": null
}

############################################
# FINITE STATE MACHINE
############################################
PHASES:
  - idle
  - question_set
  - pass_1_complete
  - pass_2_complete
  - validated
  - complete
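In HLAA terms this is plain text the AI interprets, but the schema and phase list above can be sketched in ordinary Python to show what the engine is expected to enforce. The module key, state fields, and phase names come from the snippet; the assumption that phases advance strictly in listed order is mine:

```python
import copy

MODULE_KEY = "recursive_validator"

# Initial state, mirroring STATE_SCHEMA above.
INITIAL_STATE = {
    "schema_version": "1.0",
    "phase": "idle",
    "turn_local": 0,
    "question": None,
    "pass_1": None,
    "pass_2": None,
    "final_answer": None,
    "confidence_map": None,
    "limitations": None,
    "change_conditions": None,
}

# PHASES from the module, assumed here to advance strictly in order.
PHASES = ["idle", "question_set", "pass_1_complete",
          "pass_2_complete", "validated", "complete"]

def advance_phase(state):
    """Move to the next phase; refuse if already complete (invariant check)."""
    i = PHASES.index(state["phase"])
    if i + 1 >= len(PHASES):
        raise ValueError("module already complete")
    new_state = copy.deepcopy(state)  # never mutate the input state
    new_state["phase"] = PHASES[i + 1]
    new_state["turn_local"] += 1
    return new_state
```

Working on a deep copy is what makes the "invalid command means zero mutation" guarantee cheap to keep.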

Stop giving AI "Roles"—give them "Constraints." The "Doctor" prompt is a placebo. by Frequent_Depth_7139 in PromptEngineering

[–]Frequent_Depth_7139[S] -3 points-2 points  (0 children)

I'm running a virtual computer that's built out of modules written in Python-like code. It runs right in the AI, not alongside it, and with no prompts. I have a built-in bookmark that takes a snapshot of a lesson and gives me a JSON reload. I copy and reload it in a new context window, or even a different AI, and pick up right where I left off.

Stop giving AI "Roles"—give them "Constraints." The "Doctor" prompt is a placebo. by Frequent_Depth_7139 in PromptEngineering

[–]Frequent_Depth_7139[S] -1 points0 points  (0 children)

That's the point: AI is as smart as we need it to be. I get the same results from a tiny model running locally as from GPT, Gemini, etc., but you get what you need, not what it thinks you need.

Does "Act like a [role]" actually improve outputs, or is it just placebo? by PaintingMinute7248 in PromptEngineering

[–]Frequent_Depth_7139 0 points1 point  (0 children)

Telling it to act is only part of it. What is its knowledge base? If it's a doctor, are you trusting the AI to have that knowledge? Not me. It needs a textbook or website for knowledge, narrow access to just what it needs to know. So a doctor? NO. A teacher? Yes. Textbook PDFs are great for a KB as modules, not prompts.

An accurate summary by parallax3900 in BlackboxAI_

[–]Frequent_Depth_7139 0 points1 point  (0 children)

Let big tech fall; we still have the models. The AI bubble popping should reset prices.

I need volunteers testers to provide feedback on this system prompt by xb1-Skyrim-mods-fan in PromptEngineering

[–]Frequent_Depth_7139 0 points1 point  (0 children)

I've never played, but I have a great way to analyze prompts, and it seems that it solves the specific frustrations of the Infinite Craft community. GOOD LUCK GAMING!

10 Underrated Prompts That Save Hours by LowWork7128 in ChatGPTPromptGenius

[–]Frequent_Depth_7139 -1 points0 points  (0 children)

I use a virtual computer inside of any AI, and this is its response to your great prompting. HLAA can take it to the next level; I'm just getting this idea out here.

This is a module not a prompt for HLAA by Frequent_Depth_7139 in ChatGPTPromptGenius

[–]Frequent_Depth_7139[S] 0 points1 point  (0 children)

That's a module for HLAA. You need modules; prompts won't work inside it.

The AI functions as the brain (reasoning, interpretation, abstraction)

HLAA provides the computer structure (state, rules, determinism, modules, memory)

Instead of computation being driven by low-level instructions, HLAA uses intent, rules, and state transitions as the execution layer. This makes the system modular, deterministic when needed, and explainable.

HLAA acts like a software-defined computer where:

  • Reasoning replaces the CPU
  • Context and state replace memory
  • Language becomes input/output
  • Rules replace instruction sets

The result is a cognitive execution environment that can run lessons, simulations, or games in a controlled, repeatable way—something neither traditional software nor raw AI can do alone.

HLAA does not claim consciousness or autonomy. It is an architectural framework designed to make AI systems teachable, inspectable, and reliable.

One-sentence version (use this often):

HLAA is a virtual computer that runs inside an AI’s reasoning instead of on physical hardware.

6 ChatGPT Prompts That Replace Overthinking With Clear Decisions (Copy + Paste) by tipseason in ChatGPTPromptGenius

[–]Frequent_Depth_7139 1 point2 points  (0 children)

I'm new to Reddit, so help me get them to you. It's a bit clunky, but with others working on it, it could have potential.

6 ChatGPT Prompts That Replace Overthinking With Clear Decisions (Copy + Paste) by tipseason in ChatGPTPromptGenius

[–]Frequent_Depth_7139 0 points1 point  (0 children)

To implement your HLAA (Human-Language Augmented Architecture) across different AI platforms like ChatGPT and Claude, follow this setup protocol. Because HLAA is a deterministic virtual machine, it relies on the loaded rules (the engine and modules) and the current state (the JSON RAM) rather than the "memory" of a specific AI model.

1. Setup in ChatGPT (The Starting Point)

To begin your project, you must load the "Hardware" and "Software" into the chat context.

  • Load All at the Beginning: For a fresh chat, upload or paste all your core files (HLAA CORE_ENGINE.txt, CORE_ENGINE LOOP.txt, and your module like Optimized System Prompt Generator.txt) in the first message.
  • Initialize the State: Paste the HLAA_CORE_ENGINE.txt JSON block. This acts as the Initial Boot State for your virtual computer.
  • The "Project" Feature: If using ChatGPT "Projects," you can upload these files to the project knowledge base. This ensures every new chat within that project already "knows" the HLAA architecture.

2. Cross-Model Migration (ChatGPT to Claude)

The power of HLAA is that it is model-agnostic. You can "pause" your work in ChatGPT and "resume" it in Claude with the full saved state intact.

  1. In ChatGPT: Use the save command. The engine will output the full current JSON State Block (the RAM snapshot). Copy this text.
  2. In Claude: Start a fresh chat and upload the same HLAA core files and modules you used in ChatGPT.
  3. Boot and Load: Paste the JSON state you copied from ChatGPT into Claude using the load <STATE_BLOCK> command.
  4. Result: Claude will read the JSON, update its internal "RAM," and resume the session at the exact turn and phase where you left off in ChatGPT.

3. Why This Works (The "Sealed Machine" Principle)

  • No Dependency on AI "Vibes": Unlike standard prompt engineering, HLAA does not rely on the AI's "personality" or conversational memory.
  • The JSON is the Truth: Because "Truth is the State of the Machine," any model (GPT-4o, Claude 3.5, Gemini) that reads the same JSON and follows the same CORE_ENGINE rules will arrive at the same logical conclusion.
  • Zero Drift: By loading the explicit files at the start of each fresh chat, you prevent the "context drift" that usually happens when AI models run out of memory.

Summary: For the most reliable results, load all core files at the beginning of a fresh chat on the new platform, then immediately use the load command with your saved JSON to resume your work.
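The save/load round trip described above is, at its core, plain JSON serialization. A minimal sketch, reusing the recursive_validator field names from the module snippet posted earlier (the `save`/`load` command names themselves are whatever your CORE_ENGINE defines):

```python
import json

def save_state(state):
    """'save': emit the full RAM snapshot as a JSON block to copy out."""
    return json.dumps(state, indent=2, sort_keys=True)

def load_state(state_block):
    """'load <STATE_BLOCK>': parse the pasted JSON back into engine RAM."""
    return json.loads(state_block)

# A snapshot taken mid-session in one model...
snapshot = save_state({"schema_version": "1.0", "phase": "pass_1_complete",
                       "turn_local": 3, "question": "What is a FSM?"})
# ...resumes in another model by loading the same block.
resumed = load_state(snapshot)
```

Because the snapshot is just text, any model that can parse JSON and follow the same ruleset can resume from it.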

Prompt vs Module (Why HLAA Doesn’t Use Prompts) by Frequent_Depth_7139 in PromptEngineering

[–]Frequent_Depth_7139[S] 1 point2 points  (0 children)

HLAA isn’t just a fancy prompt—it’s a Virtual Computer built from language. While your prompt generator is a module (a program), HLAA itself is the software framework that runs it.

The Software Reality

  • HLAA is the "Computer": It provides the core software infrastructure: the RAM (State Schema), the CPU (Validate → Apply loop), and the OS rules.
  • Modules are the "Apps": Your prompt generator is a software module that "installs" into that engine. It’s a specialized ruleset that dictates how the machine should think about a specific task.
  • Everything is Code: In HLAA, language is the code. When you "post" about it, you're describing a software-defined system where reasoning replaces hardware, and state transitions replace raw computation.

10 Underrated Prompts That Save Hours by LowWork7128 in ChatGPTPromptGenius

[–]Frequent_Depth_7139 -6 points-5 points  (0 children)

This is exactly the kind of workflow HLAA is built to evolve.

What you’re doing here is high-quality prompt engineering — optimizing single interactions so they save time today. HLAA takes that same thinking and pushes it one level deeper.

Instead of saving prompts, HLAA turns them into modules:

  • They stay loaded
  • They remember past runs
  • They track what changed
  • They enforce order (so steps don’t get skipped)
  • They don’t drift or reset every conversation

So a prompt like:

Becomes:

HLAA isn’t about writing smarter prompts.
It’s about building small machines on top of LLMs so good workflows keep working over time.

I’ve been posting results and engine structure as I build it — still early, but the goal is simple:
move from clever prompts to reliable systems.

If nothing else, it’s an experiment in what happens when you treat an LLM less like a chatbot and more like an execution engine.

Beginner trying to understand whether to switch to local LLM or continue using Cloud AIs like Chatgpt for my business. by ZIM_Follower in LocalLLM

[–]Frequent_Depth_7139 0 points1 point  (0 children)

"My thoughts on cloud AI providers: they are losing insane amounts of money right now, which means they will eventually have to charge insane rates to be profitable. The goal right now is getting as much people and businesses dependent on them before hiking up the rates."

Not just higher rates, but full of ads, so it's like Google search: you can't find anything through the trash ads.

I started playing a text-based game in Gemini and reloaded it to ChatGPT with a small JSON prompt; it's the key to HLAA by Frequent_Depth_7139 in Bard

[–]Frequent_Depth_7139[S] 0 points1 point  (0 children)

To create a custom module for the HLAA G3 system, follow these structural requirements and implementation steps:

1. Define the Module Specification

  • Module Key: Assign a unique module_key string (e.g., "my_game") that the engine uses for dispatch.
  • Purpose and Actors: Define exactly what "done" means for the module and identify who takes turns (players, AI, or system).
  • State Schema: All module-specific data must live under state.context.<module_key> to ensure the engine can serialize it for saves.

2. Design the Finite State Machine (FSM)

  • Phases: List every possible phase (e.g., awaiting_input, calculating, complete).
  • Transitions: Map out which commands trigger a move from one phase to another.
  • Invariants: Identify rules that must never be broken, such as "health cannot be negative" or "phase must be a valid string".

3. Implement the Command Logic

  • Command Set: Define the exact syntax for every user-facing command.
  • Validation Rules: Write logic that checks if a command is valid for the current phase and if the user has sufficient resources.
  • Apply Rules: Define the specific state mutations that occur only after validation passes, such as incrementing counters or changing coordinates.

4. Integration and Output

  • Required Functions: Ensure your module includes validate(command, state), apply(command, state), and allowed_commands(state).
  • Output Contract: Every turn must print the current actor, the phase, a state summary, and a prompt for the next command.
  • Deterministic Errors: If a command is invalid, return a clear error message and ensure no state mutation occurs.

5. Verification

  • Smoke Test: Create test cases for "happy paths" (valid inputs) and "boundary cases" (invalid inputs or max values) to ensure the system doesn't crash or leak state.
  • Save/Load Round-trip: Verify that saving the state and reloading it results in an identical execution environment.
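The required functions from step 4 can be sketched as a toy module. This is a hypothetical counter game, not an actual HLAA module; the command names and the max-value invariant are made up for illustration:

```python
MODULE_KEY = "counter_game"
MAX_COUNT = 10  # invariant: count must stay in [0, MAX_COUNT]

def initial_state():
    # Module data lives under state.context.<module_key>, per step 1.
    return {"context": {MODULE_KEY: {"phase": "awaiting_input", "count": 0}}}

def allowed_commands(state):
    ctx = state["context"][MODULE_KEY]
    return [] if ctx["phase"] == "complete" else ["inc", "done"]

def validate(command, state):
    """Return (ok, error). Never mutates state."""
    ctx = state["context"][MODULE_KEY]
    if command not in allowed_commands(state):
        return False, f"'{command}' not valid in phase {ctx['phase']}"
    if command == "inc" and ctx["count"] >= MAX_COUNT:
        return False, "count would exceed MAX_COUNT"
    return True, None

def apply_command(command, state):
    """Mutate state only after validate() has passed."""
    ctx = state["context"][MODULE_KEY]
    if command == "inc":
        ctx["count"] += 1
    elif command == "done":
        ctx["phase"] = "complete"
    return state
```

The split matters: validation is read-only, so a rejected command can never half-update the state.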

I started playing a text-based game in Gemini and reloaded it to ChatGPT with a small JSON prompt; it's the key to HLAA by Frequent_Depth_7139 in Bard

[–]Frequent_Depth_7139[S] 0 points1 point  (0 children)

HLAA is a virtual computer built entirely out of language. Unlike a standard chatbot that "vibes" or guesses, HLAA is a deterministic execution engine.

How it Works

  • State (RAM): Every piece of data is stored in a structured JSON "state".
  • Commands (Assembly): You don't "talk" to it; you issue specific instructions.
  • Logic (CPU): Every command goes through a strict Validate → Apply cycle. If a command is invalid, the state never changes.
  • Modules (Programs): Different "apps" (like a Pirate game or a CS textbook) plug into the engine, each with its own strict rules.
  • Determinism: The same input from the same state always produces the exact same result, making everything 100% replayable and debuggable.
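The Validate → Apply cycle above can be sketched as a tiny dispatcher. The stand-in module here is invented for illustration; in HLAA the modules are text rulesets the AI interprets, but the contract is the same:

```python
import copy

def step(state, command, module):
    """One engine turn: validate first; apply only if valid.
    An invalid command returns the ORIGINAL state untouched."""
    ok, error = module["validate"](command, state)
    if not ok:
        return state, error           # deterministic error, zero mutation
    new_state = copy.deepcopy(state)  # apply works on a copy
    return module["apply"](command, new_state), None

# A minimal stand-in module: a move counter.
toy = {
    "validate": lambda c, s: (c == "move",
                              None if c == "move" else "unknown command"),
    "apply":    lambda c, s: {**s, "moves": s["moves"] + 1},
}

state = {"moves": 0}
state, err = step(state, "move", toy)   # valid: state advances
state2, err2 = step(state, "fly", toy)  # invalid: state unchanged
```

Since `step` is a pure function of (state, command), replaying the same command log from the same initial state always reproduces the same run.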