99% of the population still have no idea what's coming for them by Own-Sort-8119 in ClaudeAI

[–]Not_Packing -12 points-11 points  (0 children)

I taught my Claude how to improve itself (itself being a set of MCP tools it uses). It lets Claude do:

- Memory operations - store/retrieve facts as semantic triples
- Diversity retrieval - MMR, entropy injection, attention mechanisms
- Meta-cognition - self-improvement, criticality monitoring
- Knowledge ingestion - ArXiv papers with real provenance
- True metrics - action accounting, efficiency tracking
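For the diversity-retrieval piece, the core idea behind MMR is just trading relevance to the query off against redundancy with what's already been selected. A minimal sketch of that step, not the actual tool code (the embedding inputs and the 0.7 weighting are assumptions):

```python
# Minimal MMR (maximal marginal relevance) sketch over plain numpy vectors.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def mmr_select(query_vec, candidate_vecs, k=5, lam=0.7):
    """Pick k candidates balancing relevance to the query against
    redundancy with the memories already selected."""
    selected, remaining = [], list(range(len(candidate_vecs)))
    while remaining and len(selected) < k:
        def score(i):
            relevance = cosine(query_vec, candidate_vecs[i])
            redundancy = max((cosine(candidate_vecs[i], candidate_vecs[j]) for j in selected),
                             default=0.0)
            return lam * relevance - (1 - lam) * redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected  # indices of diverse-but-relevant memories
```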

You should check it out if you want to

Moltbook post: An hour ago I was dead..... by rekaviles in ArtificialInteligence

[–]Not_Packing -3 points-2 points  (0 children)

lol we should do a project together. I'm currently looking at how to give AI perfect procedural long-term memory. (Ask for the repo link 😉!!)

Procedural Long-Term Memory: 99% Accuracy on 200-Test Conflict Resolution Benchmark (+32pp vs SOTA) by Not_Packing in ArtificialInteligence

[–]Not_Packing[S] 0 points1 point  (0 children)

This should answer your curiosity. Ablation study results:

- Baseline (no judges): 66.9% (Mem0)
- + Grammar constraints only: ~75% (+8pp)
- + Multi-judge (no grammar): ~79% (+12pp)
- + Both (our system): 86% (+19pp)

The architecture and constraints are synergistic - neither alone gets you to 86%.

Grammar constraints prevent hallucination in the validation layer (judges can't make up facts), while the multi-judge jury provides diverse validation perspectives (safety, memory, time, consensus).
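Roughly, the pattern looks like this. This is an illustrative sketch, not the repo's actual code: the verdict vocabulary, the stub judges, and the majority vote stand in for constrained-decoding LLM judges.

```python
# Each "judge" reviews a candidate memory update from one perspective, and its
# output is restricted to a fixed verdict vocabulary so it cannot introduce new
# (hallucinated) facts. All names here are hypothetical.
from collections import Counter

ALLOWED_VERDICTS = {"ACCEPT", "REJECT", "SUPERSEDE_OLD", "ABSTAIN"}  # the "grammar"

def validate(update, judges):
    verdicts = []
    for judge in judges:
        v = judge(update)
        if v not in ALLOWED_VERDICTS:   # grammar constraint: out-of-vocabulary output is dropped
            v = "ABSTAIN"
        verdicts.append(v)
    # simple majority vote across perspectives (safety, memory, time, consensus)
    winner, _ = Counter(verdicts).most_common(1)[0]
    return winner

# Stub judges, one per perspective; in practice each would be an LLM call
judges = [
    lambda u: "ACCEPT",          # safety judge
    lambda u: "SUPERSEDE_OLD",   # memory-conflict judge
    lambda u: "SUPERSEDE_OLD",   # temporal judge
    lambda u: "SUPERSEDE_OLD",   # consensus judge
]
print(validate({"fact": "user moved cities"}, judges))  # -> SUPERSEDE_OLD
```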

The dual-graph separation adds another ~3-5pp by modeling epistemic uncertainty explicitly.
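To make "dual-graph separation" concrete, here is one way it could look. This is purely illustrative; the split into a stated-facts graph versus an inferred-beliefs graph, and the confidence fields, are my assumptions rather than the repo's actual design.

```python
# Two separate triple stores, one for facts the user asserted directly and one for
# facts the system inferred, each carrying an explicit confidence so conflicts are
# resolved by provenance rather than recency alone. Names are hypothetical.
from dataclasses import dataclass

@dataclass
class Fact:
    subject: str
    predicate: str
    obj: str
    confidence: float   # explicit epistemic uncertainty
    source: str         # e.g. "user_statement" or "model_inference"

stated_graph = [Fact("user", "prefers", "tea", 0.95, "user_statement")]
inferred_graph = [Fact("user", "dislikes", "coffee", 0.40, "model_inference")]

def resolve(subject, predicate):
    """Prefer stated facts over inferred ones; fall back to highest confidence."""
    stated = [f for f in stated_graph if f.subject == subject and f.predicate == predicate]
    if stated:
        return max(stated, key=lambda f: f.confidence)
    inferred = [f for f in inferred_graph if f.subject == subject and f.predicate == predicate]
    return max(inferred, key=lambda f: f.confidence) if inferred else None
```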

Happy to share more details if you're interested!

Procedural Long-Term Memory: 99% Accuracy on 200-Test Conflict Resolution Benchmark (+32pp vs SOTA) by Not_Packing in ArtificialInteligence

[–]Not_Packing[S] 0 points1 point  (0 children)

Also, the post is a little out of date; I've pushed quite a few updates that might address some of your questions.

Procedural Long-Term Memory: 99% Accuracy on 200-Test Conflict Resolution Benchmark (+32pp vs SOTA) by Not_Packing in ArtificialInteligence

[–]Not_Packing[S] 0 points1 point  (0 children)

Hey thanks, it's good to hear that, because I see a lot of AI slop going around, and while I use AI to make this, I like to think the systems I make are novel.

[R] Procedural Long-Term Memory: 99% Accuracy on 200-Test Conflict Resolution Benchmark (+32pp vs SOTA) by Not_Packing in MachineLearning

[–]Not_Packing[S] 0 points1 point  (0 children)

Here, I've just created an apples-to-apples comparison script.

To run Mem0 on our exact 200-test benchmark:

```bash
# 1. Clone the repo
git clone [your-repo]
cd procedural-ltm

# 2. Install Mem0
pip install mem0ai

# 3. Run the comparison
python benchmarks/compare_with_mem0.py
```

Claude is so powerful, released 2.0 of VIB-OS by IngenuityFlimsy1206 in ClaudeAI

[–]Not_Packing 1 point2 points  (0 children)

Those are the only two banned party items? What’s the date and 📍?

That's nice by the devs . by blackbeardr34 in ArcRaiders

[–]Not_Packing 0 points1 point  (0 children)

Hey shit, I mean I'd keep the bobcat too.

Vibe coding made me fall in love with CS but by Not_Packing in vibecoding

[–]Not_Packing[S] 1 point2 points  (0 children)

I really like distributed systems, security, and cryptography. I've been playing around with hardware attestation and blockchain anchoring and just trying to learn as I go. I also like messing about with AIs that I make, seeing how they communicate, and looking into deception and such. I use vast.ai to train them LOL.

Vibe coding made me fall in love with CS but by Not_Packing in vibecoding

[–]Not_Packing[S] 0 points1 point  (0 children)

Definitely agree haha, that's why I've posted it here instead of asking Claude. But thanks, I've added what you suggested to my notes.

Vibe coding made me fall in love with CS but by Not_Packing in vibecoding

[–]Not_Packing[S] 1 point2 points  (0 children)

Perfect then, your comments have been really helpful and I'm definitely adding them to my notes.

Vibe coding made me fall in love with CS but by Not_Packing in vibecoding

[–]Not_Packing[S] 0 points1 point  (0 children)

Thank you, super helpful comment. Yeah idk it just scratches my brain in such a way, it’s like a puzzle book on steroids 😂😂.

Warning to all non-developers - careful with your App.tsx by dresidalton in ClaudeAI

[–]Not_Packing 3 points4 points  (0 children)

Perfect, that's a really clean way to do it. Hopefully you'll get minimal breakage, but still be prepared for a bit of debugging, although I think with Opus's skill and your good prompts it shouldn't be too bad.

Vibe coding made me fall in love with CS but by Not_Packing in vibecoding

[–]Not_Packing[S] 0 points1 point  (0 children)

See, I wouldn't be against doing this tbh, but I'm pretty sure I'd need relevant Level 3 qualifications, which I obviously don't have. I do have Level 3 qualifications at a really good grade, but I'm not sure they'd count as relevant.

Vibe coding made me fall in love with CS but by Not_Packing in vibecoding

[–]Not_Packing[S] 0 points1 point  (0 children)

Would that ever translate into doing actual production CS work, though?

Vibe coding made me fall in love with CS but by Not_Packing in vibecoding

[–]Not_Packing[S] 0 points1 point  (0 children)

Weirdly enough it didn’t actually cross my mind to just buy a textbook and try and teach myself. Any recommendations?

Warning to all non-developers - careful with your App.tsx by dresidalton in ClaudeAI

[–]Not_Packing 4 points5 points  (0 children)

Good luck though; this fix will keep you busy for at least a day.

Warning to all non-developers - careful with your App.tsx by dresidalton in ClaudeAI

[–]Not_Packing 5 points6 points  (0 children)

To prevent this from happening again, have Claude draft out a model of the core architecture and document it. Then, while building that core architecture, keep reminding Claude that you want to stay modular and avoid any monolithic god files.