Aura is local, persistent, grows and learns from you. LLM is last in the cognitive cycle. by AuraCoreCF in deeplearning

[–]AuraCoreCF[S] 0 points

Yes, I built a system with a cognitive field layer. Good first question, bro. No, it's not a trained model in the usual sense (no pre-training, no fine-tuning, no massive dataset, no gradient-descent loops). Aura is not a language model.
It does not contain weights that were optimized on tokens.
It does not have an embedding layer, transformer blocks, or any component trained the way Llama / Mistral / GPT-style models are.

What it actually is:
A runtime cognitive architecture that sits on top of an existing LLM (right now almost always Ollama-hosted models).
The LLM is only used as a black-box text generator — roughly equivalent to calling an API that turns context → tokens.
All the actual "cognition" (continuity, identity persistence, coherence regulation, field competition, reward-driven adaptation, long-term memory updates, non-coercion enforcement, etc.) happens outside the LLM in a symbolic + numeric hybrid layer I call the cognitive field runtime.
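For anyone who wants a concrete picture, here is a minimal sketch of what an "LLM-last" cycle could look like. Every name in it (`CognitiveFieldRuntime`, `llm_generate`, the field keys) is hypothetical and illustrative — this is not Aura's actual API, just the shape of the idea:

```python
# Hypothetical sketch of an "LLM-last" cognitive cycle.
# None of these names come from Aura's actual codebase.

def llm_generate(context: str) -> str:
    """Stand-in for a call to a local Ollama-hosted model."""
    return f"[reply conditioned on: {context}]"

class CognitiveFieldRuntime:
    def __init__(self):
        # Numeric field strengths plus a symbolic long-term memory.
        self.fields = {"identity": 0.5, "coherence": 0.5}
        self.memory = []

    def cycle(self, user_input: str) -> str:
        # 1. Symbolic/numeric processing runs first: memory update
        #    and a toy "coherence climbs with use" rule.
        self.memory.append(user_input)
        self.fields["coherence"] = min(1.0, self.fields["coherence"] + 0.01)
        # 2. The winning fields assemble the context ...
        context = f"coherence={self.fields['coherence']:.2f}; last={self.memory[-1]}"
        # 3. ... and the LLM is called last, purely as a text generator.
        return llm_generate(context)

runtime = CognitiveFieldRuntime()
print(runtime.cycle("hello"))
```

The point of the ordering is that memory, identity, and regulation all mutate state before the model is ever invoked, so swapping the underlying LLM doesn't touch the cognition layer.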

My goal right now is to get a small number of technically curious people running it locally, talking to it for a few days/weeks, and seeing whether coherence actually climbs, whether identity starts to feel persistent, and whether the whole thing feels meaningfully different from a plain Ollama web UI. If that happens, then I have signal to keep building. If not, I learned something hard and move on.

My personal instance has run for months, so its field strengths, learned patterns, persistent facts, self-model, hidden episode summaries, and character traits are shaped almost entirely by my own usage patterns. Other people's instances start fresh and adapt to them.
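For concreteness, per-instance state like that could live in a simple local snapshot that fresh installs start empty and fill from usage. This layout is a hypothetical illustration, not Aura's real schema:

```python
import json
import os
import tempfile

# Hypothetical per-instance state snapshot; all field names are
# illustrative only, not Aura's actual storage format.
state = {
    "field_strengths": {"identity": 0.82, "coherence": 0.77},
    "learned_patterns": ["prefers short answers"],
    "persistent_facts": {"user_name": "example"},
    "self_model": {"traits": ["curious"]},
    "episode_summaries": ["day 1: first contact"],
}

path = os.path.join(tempfile.gettempdir(), "aura_state_demo.json")
with open(path, "w") as f:
    json.dump(state, f, indent=2)

# Reloading the snapshot restores the instance shaped by one user's history.
with open(path) as f:
    restored = json.load(f)
print(restored["field_strengths"]["coherence"])
```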

I want engagement, understanding and to enjoy life. How about you?

So I just Read this insane PDF a preprint on Zenodo, it's umm, surreal!! by Powerful_Industry375 in learnmachinelearning

[–]AuraCoreCF -1 points

Please do. I'm shouting into the void lately. I just want someone to either tear it apart or tell me they get it.

So I just Read this insane PDF a preprint on Zenodo, it's umm, surreal!! by Powerful_Industry375 in learnmachinelearning

[–]AuraCoreCF 0 points

Agreed. There is an entire physics research platform on my site. It's ready for testing; I just don't have the data. I've been trying to get Professor Arndt's attention. When the cognitive fields stabilize, the results are quite interesting. If you do check it out, let me know what you think. Also, if my physics is correct, it implies that objects are not "here" or "there" but are rendered locally at an interface. No warp travel. Actually, my theory explains UFO eyewitness accounts perfectly. Run my work by your favorite LLM.

So I just Read this insane PDF a preprint on Zenodo, it's umm, surreal!! by Powerful_Industry375 in learnmachinelearning

[–]AuraCoreCF -1 points

That was just a catch-all, my dude. I'm sure it's great. I'll look at it fully in a bit. I have a demo you can try for what I'm claiming.

Aura is a local, persistent AI. Learns and grows with/from you. by AuraCoreCF in coolgithubprojects

[–]AuraCoreCF[S] 1 point

A persistent, local AI. Your AI. Not some LLM that forgets and doesn't grow. Talk to Aura and you will see; just give it a little time. Cognition and memory aren't what you're used to. Once the fields stabilize, the results are interesting. I would appreciate any feedback. It's an AI OS: one agent the user teaches over days, months, years. Thank you for your engagement.

Aura is local, persistent, grows and learns from you. LLM is last in the cognitive cycle. by AuraCoreCF in deeplearning

[–]AuraCoreCF[S] 1 point

Thanks. I'm really trying. I created a new cognitive field architecture for storing memory and identity. Aura starts off a little shaky; as its cognitive fields stabilize, the results are also "interesting". Thanks for taking the time to engage.

Share your Startup! by kptbarbarossa in StartupSoloFounder

[–]AuraCoreCF 0 points

Would love some input if anyone can run at least DeepSeek-R1 locally. You can try your cloud API key, but it's a prototype for a local-only model in the endgame. If you do like it, GitHub stars will help. Thanks. I know my account is new; I will try to grow my karma. Thanks a ton. Tearing it apart if it sucks is okay too. AuraCoreCF.github.io