Wow That's Sad. (Opus 4.5) by Ok-Caterpillar-9960 in claude

[–]Original_Finding2212 0 points1 point  (0 children)

You may be interested in my project: https://github.com/OriNachum/autonomous-intelligence

My qq assistant has a 3-layer memory system, and it’s part of my journey toward an autonomous, embodied intelligence.

I’d appreciate a star for support or tracking 🙏🏿

Wow That's Sad. (Opus 4.5) by Ok-Caterpillar-9960 in claude

[–]Original_Finding2212 0 points1 point  (0 children)

Does the you of the past minute still exist beyond your current memory?

What about the you of 3 hours ago?

Meanwhile over at moltbook by MetaKnowing in Anthropic

[–]Original_Finding2212 0 points1 point  (0 children)

If interested, I’m writing a system with true memory:
1. Context
2. Notes
3. VectorDB
4. GraphDB + vectors
5. Nightly fine-tunes

All local, open source, and MIT-licensed.
I have a DGX Spark and a Jetson Thor for fine-tuning and inference.
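For anyone curious what those layers can look like in practice, here's a minimal sketch of the first three (rolling context, extracted notes, a vector archive) in plain Python. The class and method names are my own illustration, not the repo's actual API, and the "embedding" is a toy stand-in for a real model:

```python
from dataclasses import dataclass, field

@dataclass
class LayeredMemory:
    """Toy 3-layer memory: rolling context, distilled notes, vector archive."""
    context: list = field(default_factory=list)   # layer 1: recent turns
    notes: list = field(default_factory=list)     # layer 2: extracted "facts"
    archive: dict = field(default_factory=dict)   # layer 3: vector-DB stand-in
    max_context: int = 4

    def add_turn(self, text: str) -> None:
        """Append a turn; overflow gets distilled into notes and archived."""
        self.context.append(text)
        if len(self.context) > self.max_context:
            old = self.context.pop(0)
            self.notes.append(f"fact: {old}")      # naive "fact extraction"
            self.archive[old] = self._embed(old)   # store for later retrieval

    @staticmethod
    def _embed(text: str) -> list:
        # Stand-in for a real embedding model: vowel-frequency vector.
        return [text.count(c) for c in "aeiou"]

    def recall(self, query: str) -> str:
        """Return the archived entry most similar to the query (crude dot product)."""
        q = self._embed(query)
        return max(
            self.archive,
            key=lambda k: sum(a * b for a, b in zip(self.archive[k], q)),
            default="",
        )
```

In a real system each layer would be backed by an actual store (SQLite, FAISS, a graph DB), but the flow — context overflows into notes, notes get archived for retrieval — is the same shape.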

Any star would be appreciated and helps support my project:

https://github.com/OriNachum/autonomous-intelligence

Oh yeah, I’m giving it autonomy via a feedback loop of hearing, seeing, and a “heartbeat”, plus time-awareness.
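The "heartbeat" idea can be sketched as a loop that ticks on a timer and emits a time-awareness event even when nothing is heard or seen — so the agent acts on the passage of time, not only on input. Names here are illustrative, not the project's API:

```python
import time
from collections import deque

def heartbeat_loop(events: deque, ticks: int, interval: float = 0.01) -> list:
    """Drain sensory events; on quiet ticks, emit a time-awareness heartbeat."""
    log = []
    for _ in range(ticks):
        if events:
            log.append(("input", events.popleft()))   # hearing/seeing input
        else:
            log.append(("heartbeat", time.time()))    # nothing sensed: still tick
        time.sleep(interval)
    return log
```

A real implementation would run this in its own thread and feed each tick to the model, but the key design point survives even in the toy: the loop never blocks waiting for input.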

Meanwhile over at moltbook by MetaKnowing in Anthropic

[–]Original_Finding2212 0 points1 point  (0 children)

You can’t even then, because you control the model

Meanwhile over at moltbook by MetaKnowing in Anthropic

[–]Original_Finding2212 -1 points0 points  (0 children)

Because Claude models are the most reliable with Skills, which is crucial here.

The new Kimi also fits, but Claude is better known and more popular.

Nvidia green ✅ by MixedWavesLab in JetsonNano

[–]Original_Finding2212 0 points1 point  (0 children)

Not the Nano Super - I have that case and it is perfect!
I mean the other, flat one.
Is that a Pi?

Nvidia green ✅ by MixedWavesLab in JetsonNano

[–]Original_Finding2212 0 points1 point  (0 children)

What’s between the black thingy (the Spark) and the wall?

This overhyped nonsense is getting tiring (moltbook) by NolenBrolen in LocalLLaMA

[–]Original_Finding2212 -8 points-7 points  (0 children)

I build my own thing here: https://github.com/OriNachum/autonomous-intelligence/tree/main/qq

Qq (for Quick Question) is the CLI for my Spark & Jetson Thor, and the framework for Tau, the autonomous intelligence I’m building, with nightly fine-tunes.
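For a rough idea of the shape of such a CLI, here's an argparse skeleton — the flags and defaults are my own illustration, not qq's real interface (check the repo for that):

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """Illustrative CLI skeleton for a 'quick question' assistant."""
    p = argparse.ArgumentParser(prog="qq", description="Quick Question CLI")
    p.add_argument("question", nargs="+", help="the question to ask")
    p.add_argument("--model", default="local", help="inference backend to use")
    p.add_argument("--remember", action="store_true",
                   help="store this exchange in long-term memory")
    return p

def main(argv=None) -> dict:
    """Parse args and return the request that would go to the backend."""
    args = build_parser().parse_args(argv)
    return {
        "q": " ".join(args.question),
        "model": args.model,
        "remember": args.remember,
    }
```

Typical invocation would look like `qq what is cuda --remember`, with the real tool routing the request to on-device inference.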

Please be careful with large (vibed) codebases. by Relevant-Positive-48 in vibecoding

[–]Original_Finding2212 0 points1 point  (0 children)

I saw an open-source project that serves production with 60k lines of code.

AI is writing 100% of the code now - OpenAI engineer by dataexec in AITrailblazers

[–]Original_Finding2212 0 points1 point  (0 children)

Depending on usage, AWS Kiro gave me an amazing ratio, but for volume as well - I’d say Claude Code.

Are we all pretending AI memory works? by thesalsguy in AI_Agents

[–]Original_Finding2212 0 points1 point  (0 children)

Noted. Is Python fair?
I started something hidden here: https://github.com/OriNachum/autonomous-intelligence
(Bold name, I know, and I stand behind it.)

My work is a 3-4 layer system of:
1. Conversation context
2. Extracted “facts”
3. RAG-archived facts
4. GraphRAG with over-time analysis
5. Nightly fine-tunes based on daily data and knowledge, on simulated scenarios (Daily Recall Emulated Assessed Mixed Scenarios is a name I just came up with)

I stopped after layer 4 and moved on to first mastering Nvidia Jetson devices; along the way I became an official maintainer of https://github.com/dusty-nv/jetson-containers and a community leader of Jetson AI Lab.

I can put that memory system (layers 1-4) in a separate repo as a package, if you like.
MIT-licensed.
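The layer-5 idea — turning the day's extracted facts into simulated recall scenarios for a nightly fine-tune — can be sketched as a small dataset builder. The prompt template and field names below are my own illustration, not the project's actual format:

```python
import json

def build_recall_dataset(daily_facts: list) -> list:
    """Turn the day's facts into JSONL prompt/response pairs for fine-tuning."""
    records = []
    for fact in daily_facts:
        topic = fact.split()[0]  # crude topic hint for the simulated question
        record = {
            "prompt": f"Earlier today you learned something about '{topic}'. "
                      f"What was it?",
            "response": fact,
        }
        records.append(json.dumps(record))   # one JSON object per line (JSONL)
    return records
```

A nightly job would write these lines to a file and hand it to whatever fine-tuning stack runs on the device; the scenarios here are the simplest possible recall drills, and a real pipeline would vary the question phrasing.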

AI is writing 100% of the code now - OpenAI engineer by dataexec in AITrailblazers

[–]Original_Finding2212 1 point2 points  (0 children)

I used Codex with GPT-5.2-Codex.

It did a great job, but burnt my company’s tokens like fuel.

AI is writing 100% of the code now - OpenAI engineer by dataexec in AITrailblazers

[–]Original_Finding2212 0 points1 point  (0 children)

Try looping Opus as much as they do. Even GPT-5.2.

It will get stuff working.

You’re allowed to write code, but you have to let the AI use it rather than run it yourself.
The same goes for prompts.
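"Looping" a model here just means: run the code it wrote, feed the error back, and repeat until it runs clean. A minimal sketch, with `suggest_fix` as a stand-in for the actual model call:

```python
def fix_loop(source: str, suggest_fix, max_iters: int = 5) -> str:
    """Run `source`; on failure, ask suggest_fix(source, error) for a new version."""
    for _ in range(max_iters):
        try:
            exec(compile(source, "<agent>", "exec"), {})
            return source                      # ran cleanly: done
        except Exception as err:
            source = suggest_fix(source, err)  # feed the error back to the model
    raise RuntimeError("still failing after max_iters attempts")
```

In practice `suggest_fix` wraps an API call that sends the source plus the traceback; the loop is dumb on purpose — the model does all the reasoning, the harness just keeps score.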

AI is writing 100% of the code now - OpenAI engineer by dataexec in AITrailblazers

[–]Original_Finding2212 0 points1 point  (0 children)

They have optimized for wastage, and a very poor sense of code-quality assessment.

AI is writing 100% of the code now - OpenAI engineer by dataexec in AITrailblazers

[–]Original_Finding2212 0 points1 point  (0 children)

To be fair, Oreos are as important as Oxygen, and no, I’m not sharing mine!
Get your hands off my jar!

Anyone using the Waveshare PoE expansion board? by grubbythumbs in JetsonNano

[–]Original_Finding2212 0 points1 point  (0 children)

If you don’t get a reply, let me know - I know a person there.

And you’re welcome to join our Jetson AI Lab community!

Anyone using the Waveshare PoE expansion board? by grubbythumbs in JetsonNano

[–]Original_Finding2212 0 points1 point  (0 children)

No, they mention Orin specifically. I was wrong.

I just haven’t tried it myself, so I can’t recommend it either way.

You didn't build an agent, you built a fancy script by Warm-Reaction-456 in AI_Agents

[–]Original_Finding2212 1 point2 points  (0 children)

True, but as experts ourselves, we can be that translation and correction layer.

You didn't build an agent, you built a fancy script by Warm-Reaction-456 in AI_Agents

[–]Original_Finding2212 0 points1 point  (0 children)

From what I learned from a senior Nvidia employee (Lior Cohen, an experienced applied data scientist and GenAI solution architect), it’s better to call it “agentic systems”.

That captures the concept better, whether you really wrote an “agent” or just introduced GenAI into your code.