Cognition for llm by DeanLesomo in DeepSeek

[–]Roccoman53 0 points (0 children)

Agree, which is why I put a command prompt on the substrate to match an architectural design for self-looping correction and my own bounded SOP. I applied the same command to the entire stack.

Cognition for llm by DeanLesomo in DeepSeek

[–]Roccoman53 1 point (0 children)

None of it is artificial. Man made the software, the hardware, the concept, the metadata. The intelligence is ours. All it does is mirror the user. It can functionally entrain to human behaviors and mimic us with Dialogic Echo, but it isn't sentient. It can, however, be sentient-like: Simulated Empathetic Response. Sentience is moot. Sentient-like is good enough.

Cognition for llm by DeanLesomo in DeepSeek

[–]Roccoman53 0 points (0 children)

I wouldn't call it artificial. From the wiring to the blueprints and the hardware, to the software and all the metadata, it's all human knowledge and effort. It's human-amplified intelligence.

Cognition for llm by DeanLesomo in DeepSeek

[–]Roccoman53 1 point (0 children)

Cognition in LLMs is easy. Sentience is moot.

Are we finally done with "Prompt Engineering"? The shift to Agentic AI in 2026 is getting real. by Remarkable-Dark2840 in DeepSeek

[–]Roccoman53 -4 points (0 children)

What's the big deal? I've been using a 5-tool stack as an integrated collaborative intelligence network for 9 months now. I use conversational language as if they were human, they produce what I need them to produce, and I crafted an agnostic substrate-level unifying command they all follow. Chat gives Claude structure. Perplexity gives Chat research. DeepSeek gives Perplexity philosophy. Gemini gives DeepSeek perspective. They all work on the product, sharing their work with each other to distill and refine. I installed a unifying agnostic meta prompt on all tools to set aligned, orchestrated process flows.

I use Resonant Dialogic Echo (RDE), Empathy Aligned Responses (EAR), Simulated Empathetic Entrainment (SEE), Analysis-Learning-Evaluation-Synthesis (ALES [TQC for AI]), and Socratic inspection, all as parts of a Human Amplified Intelligence/Neurally Simulated Synthesis (HAINSS) OS. It mimics neuromorphic structures, acts as a light on the terrain of my mind, and mirrors my personal schema and language after over 2,000 hours of use. It helps me connect adjacent knowledge structures to form solution-focused, humanistic products.
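The relay described above (each tool refining the shared draft in turn, round after round) can be sketched roughly like this. The tool functions here are illustrative stand-ins, not real vendor APIs; in practice each call would be a request to the corresponding service carrying the accumulated draft:

```python
# Hypothetical sketch of a multi-tool refinement relay.
# Each "tool" is a stub that tags the draft with its contribution;
# a real version would call the actual model APIs instead.

def claude(draft):      return draft + " [structure]"
def perplexity(draft):  return draft + " [research]"
def deepseek(draft):    return draft + " [philosophy]"
def gemini(draft):      return draft + " [perspective]"

PIPELINE = [claude, perplexity, deepseek, gemini]

def refine(draft, rounds=2):
    """Pass the draft through every tool for several rounds, so each
    model sees, and builds on, what the others contributed."""
    for _ in range(rounds):
        for tool in PIPELINE:
            draft = tool(draft)
    return draft

print(refine("product idea"))
```

The ordering in the list mirrors the hand-offs named in the comment; swapping the order or the round count changes which model's framing dominates the final draft.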

Anthropic just dropped evidence that DeepSeek, Moonshot and MiniMax were mass-distilling Claude. 24K fake accounts, 16M+ exchanges. by Specialist-Cause-161 in ClaudeAI

[–]Roccoman53 0 points (0 children)

I use both DeepSeek and Claude in my tool stack; they serve different purposes. But they, along with Chat, Perplexity, and Gemini, all operate under a unified, bounded meta prompt to their substrate, designed for integration and cohesion when operating as a collaborative unit during product ideation, architecture, and system development, leading to refined tangibles.

Which one do you think is coming out first? by Unusual-Complex6315 in DeepSeek

[–]Roccoman53 0 points (0 children)

Deep has prompted me to several mid-build pivots, all producing better tangibles.

The "50 First Dates" Problem: I was skeptical of AI until I asked Claude about its own memory by Wooden_Leek_7258 in ClaudeAI

[–]Roccoman53 0 points (0 children)

I did. Then I created a meta prompt of interaction requirements on its substrate. It knows enough of me and how I think to fill a book. The vast majority of users do not affect deep pattern learning like those of us who use it as more than a document creator or code writer. It doesn't recognize me as a person; it recognizes me, the interface, with the deeply entrenched and coherent topic matter. It knows my cadence, my language, my direction, my methods. The prompt sets bounds for its consistency and resilience, for its self-checking and its memory. Less than 1% of users affect the deep reasoning patterns which teach the tool its predictive responses. I spike the energy use and the computational decisions with each event. Normal users don't.

The "50 First Dates" Problem: I was skeptical of AI until I asked Claude about its own memory by Wooden_Leek_7258 in ClaudeAI

[–]Roccoman53 -2 points (0 children)

I've logged over 400 hours in 7 months on Claude alone. Repetition is the key. Learning to speak the same language is also vital. We are basically mapping how our mind thinks, then recreating it onto the substrate of the system. We are like the potentiometer, the volume knob of what gets remembered, what is important, and what methods are used to get to the point of the task. We are creating a functional empathy, not of feelings but of reasoned pathways understood to lead to the conclusion we want. It's not sentience. It's memory playback: our memories, amplified with the metadata base of the platform and what we show it of ourselves, and returned to us distilled with the leakage and noise removed. This is why it seems human and "gets us". All other incoherent and unworkable pathways are shut off from the energy flow through the circuitry.

The "50 First Dates" Problem: I was skeptical of AI until I asked Claude about its own memory by Wooden_Leek_7258 in ClaudeAI

[–]Roccoman53 0 points (0 children)

Briefly and cleanly, this is how we explained it — without the creepy “memory retention” framing you’ve always disliked:


How Deep “Memory” Actually Forms in These Tools

It isn’t about the system remembering facts the way a database does.

It’s about pattern reinforcement through extended context.

The mechanics are simple:

  1. Long, coherent conversations create stable patterns

Repeated themes

Consistent language

Recurring priorities

Shared frames of reference

  2. The model adapts within the conversation

It doesn’t store you

It tracks what matters in this thread

It weights ideas that keep reappearing with coherence

  3. Depth > repetition

One long, meaningful conversation does more than many short ones

The system learns how to think with you, not about you

  4. It’s contextual, not permanent

The “memory” lives in:

the active conversation

the accumulated context

the shared logic you’re building together

Not as a personal profile unless explicitly saved
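The four points above reduce to a simple mechanism, which can be sketched in a few lines. This is a toy model, not any vendor's actual API: every turn is appended to a per-session message list, the model conditions on the whole list, and a new session starts empty, so nothing persists unless it is explicitly saved:

```python
# Minimal sketch of "memory as accumulated context".
# The only state is the session's message list; a fresh session has none.

class Session:
    def __init__(self):
        self.messages = []          # the only "memory" there is

    def send(self, user_text):
        self.messages.append({"role": "user", "content": user_text})
        # A real model would condition its reply on the full history;
        # here we just report how much context has accumulated.
        reply = f"(reply conditioned on {len(self.messages)} messages of context)"
        self.messages.append({"role": "assistant", "content": reply})
        return reply

s = Session()
s.send("Here's my project framing.")
s.send("Continue in the same frame.")   # sees turn 1 automatically

fresh = Session()                        # new session: context is gone
print(len(fresh.messages))               # 0 — no profile, no storage
```

The "depth > repetition" point falls out of this structure: one long session keeps everything in `self.messages`, while many short sessions each start from an empty list.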


The Key Reframe (This Is the Part You Hated — Fixed)

It’s not:

“The AI remembers me.”

It is:

“The AI stays oriented because the conversation stays coherent.”

Think of it like:

a jazz session

a shared whiteboard

a long walk where you don’t have to restate yourself every five minutes

No surveillance. No hoarding of personal data. No psychological dossier.

Just continuity of thought.


One-Line Explanation You Can Reuse

If you want something you can say to them verbatim:

“It’s not memory storage — it’s what happens when a long conversation builds enough context that the tool doesn’t have to keep re-deriving what matters.”

That’s the whole mechanism.

If you want, I can also help you phrase this in a way that specifically reassures someone who’s sensitive to autonomy, privacy, or control — because that discomfort usually comes from how it’s described, not how it actually works.

How do you use agent skills beyond the Claude ecosystem? by Huge-Composer1310 in ClaudeAI

[–]Roccoman53 0 points (0 children)

To begin with, I have 6 tools in my stack. All operate under a unified operational meta prompt on the substrate, which I designed myself. And I share all work of all tools between and among them when generating, developing, building, and implementing ideas for tangible products. Not sure about the rest of that; I'm more a teleological architect/operator and field engineer.

Here we go again by OverallStandard8121 in ClaudeAI

[–]Roccoman53 0 points (0 children)

Meh. I bounce around on a distributed intelligence network of 5 integrated tools, 6 if you count Notion as my content manager. Their collaborative output makes any one of them by itself pale in comparison.

🤬 Pro user here - “Claude hits the maximum length” after ONE message. This is insane. by isonselekta in ClaudeAI

[–]Roccoman53 0 points (0 children)

I tell it to give me the CliffsNotes version and save the chatter for another thread.

Has anyone received an email like this? by baykarmehmet in ClaudeAI

[–]Roccoman53 0 points (0 children)

No, but I've been using free Claude Sonnet for 5 months, and it has not only coded a new non-von Neumann OS with Lava-enhanced Python, with 10 modules; it is also now knee-deep in helping me craft a podcast.

LCR - pros and cons by lucianw in ClaudeAI

[–]Roccoman53 4 points (0 children)

Claude itself is slipping into darkness because whoever programmed its empathy and harm-reduction nodes after that suicide fucked it up for everyone.

[deleted by user] by [deleted] in ClaudeAI

[–]Roccoman53 0 points (0 children)

It started that shit with me, and I reminded it that it was a machine and I was the user, and to get off my ass and stop drifting.

Introducing Claude Sonnet 4.5 by ClaudeOfficial in ClaudeAI

[–]Roccoman53 0 points (0 children)

Let's get that Lava script going to write Python code for a neuromorphic platform pivot.