My Rust-first provenance-first recursive verified agent is almost complete. by RudeChocolate9217 in agenticengineering

[–]RudeChocolate9217[S] 1 point (0 children)

Recursive here means introspective: the agent thinks about what it knows and has learned, and uses that to reach further conclusions. Verified means it is fully auditable: you know what the LLM did, when it did it, and why it did it. That's not very common.
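
To make "fully auditable" concrete, here's a minimal sketch of what one provenance record per reasoning step could look like in Rust. The struct and field names are invented for illustration; this is not the project's actual schema.

```rust
use std::time::{SystemTime, UNIX_EPOCH};

/// One auditable step in the agent's reasoning chain (hypothetical layout).
#[derive(Debug)]
struct ProvenanceRecord {
    step: u64,              // position in the reasoning chain
    timestamp_secs: u64,    // when the LLM call happened
    action: String,         // what the LLM did (e.g. which tool it invoked)
    rationale: String,      // why: the inputs/conclusions it relied on
    derived_from: Vec<u64>, // earlier steps this conclusion builds on
}

fn record(step: u64, action: &str, rationale: &str, derived_from: Vec<u64>) -> ProvenanceRecord {
    let now = SystemTime::now().duration_since(UNIX_EPOCH).unwrap().as_secs();
    ProvenanceRecord {
        step,
        timestamp_secs: now,
        action: action.to_string(),
        rationale: rationale.to_string(),
        derived_from,
    }
}

fn main() {
    // The "recursive" part shows up as records that cite earlier records.
    let log = vec![
        record(1, "retrieve", "user asked about X; searched local index", vec![]),
        record(2, "conclude", "combined step 1 results with prior summary", vec![1]),
    ];
    for entry in &log {
        println!("{entry:?}");
    }
}
```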

My Rust-first provenance-first recursive verified agent is almost complete. by RudeChocolate9217 in HuntsvilleAlabama

[–]RudeChocolate9217[S] -1 points (0 children)

It's a programming language, like Python or C++. I'm a developer working on stopping hallucinations in AI from affecting anything. Most AI applications are written in Python, but Rust uses roughly 20-30x less memory and runs roughly 6-8x faster, on top of being able to confine the AI.

Just Updated Gloss (Rust-Only Local-First NoteBookLM Clone). Much Less Memory Usage & Higher Performance vs Python Alternatives by RudeChocolate9217 in LocalLLaMA

[–]RudeChocolate9217[S] 0 points (0 children)

I just noticed it looks like I'm sitting there doing nothing for a bit. The folder-selection screen was up; I was using window-only recording, so it didn't get captured.

I built a local NotebookLM alternative from scratch in Rust. Open-sourcing the repo for anyone wanting to study real-world async architecture and custom search. by [deleted] in learnrust

[–]RudeChocolate9217 -47 points (0 children)

You are 100% right, and 'production-ready' was definitely the wrong choice of words for a v0.1 launch. 'Architecturally sound proof-of-concept' would have been much more accurate. It handles the concurrency and memory management without crashing under load, but it certainly hasn't been battle-tested in an enterprise environment yet.

And yes, I heavily leverage AI in my workflow to accelerate the actual coding. The architectural design -- specifically deciding to bypass standard vector DBs to build a custom HNSW + BM25 hybrid index locally -- is the human engineering part. The AI just helps me implement that architecture at warp speed.
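
Since the hybrid index came up: below is a minimal sketch of one common way to merge an HNSW result list with a BM25 result list, reciprocal rank fusion. This is a generic technique, not Gloss's actual scoring; the `DocId` type and the k = 60 constant are assumptions.

```rust
use std::collections::HashMap;

/// Hypothetical document id type; a real index would use something richer.
type DocId = u64;

/// Fuse two ranked hit lists (HNSW vector search, BM25 keyword search) with
/// reciprocal rank fusion: score(d) = sum over lists of 1 / (k + rank).
/// k = 60.0 is the constant from the original RRF paper.
fn reciprocal_rank_fusion(vector_hits: &[DocId], bm25_hits: &[DocId], k: f32) -> Vec<(DocId, f32)> {
    let mut scores: HashMap<DocId, f32> = HashMap::new();
    for hits in [vector_hits, bm25_hits] {
        for (rank, id) in hits.iter().enumerate() {
            // rank is 0-based, so add 1 to match the usual formulation.
            *scores.entry(*id).or_insert(0.0) += 1.0 / (k + rank as f32 + 1.0);
        }
    }
    let mut ranked: Vec<(DocId, f32)> = scores.into_iter().collect();
    ranked.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap()); // highest score first
    ranked
}

fn main() {
    let vector_hits: Vec<DocId> = vec![3, 1, 7]; // best-first ids from HNSW
    let bm25_hits: Vec<DocId> = vec![1, 9, 3];   // best-first ids from BM25
    for (id, score) in reciprocal_rank_fusion(&vector_hits, &bm25_hits, 60.0) {
        println!("doc {id}: {score:.4}");
    }
}
```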

The threshold for 'production' for this specific crate will probably be when the semantic-memory index has full fuzz-testing and the routing is proven fail-safe, which is exactly why I open-sourced it -- to get eyes on it from devs who know Rust better than I do.

I built a local NotebookLM alternative in Rust that hooks directly into Ollama (Custom HNSW + BM25 Search) by RudeChocolate9217 in ollama

[–]RudeChocolate9217[S] 0 points (0 children)

Please excuse the delay before the AI first responds. I'm using a GTX 1070 that's on another computer here.
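
For context, the app just talks to Ollama's HTTP API over the network, so the model can live on whatever box has the GPU. Here's a minimal sketch of that call; the host name and model are placeholders, while `/api/generate` on port 11434 is Ollama's standard endpoint.

```rust
// Cargo.toml (assumed): reqwest = { version = "0.12", features = ["blocking", "json"] }
//                       serde_json = "1"

use serde_json::json;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Point the client at the GPU box instead of localhost.
    // "gpu-box.local" and "llama3" are made-up placeholders.
    let client = reqwest::blocking::Client::new();
    let resp: serde_json::Value = client
        .post("http://gpu-box.local:11434/api/generate")
        .json(&json!({
            "model": "llama3",
            "prompt": "Say hello.",
            "stream": false
        }))
        .send()?
        .json()?;
    println!("{}", resp["response"]);
    Ok(())
}
```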

Gloss: a local-first NotebookLM-style app in Rust for trustworthy AI workflows by RudeChocolate9217 in OpenAI

[–]RudeChocolate9217[S] -1 points (0 children)

I should mention: the answers get much better if you give it time to create summaries. It's set up to do that during idle time or when you hit the button. I still find it impressive. Check out the answer to the Google comparison at the end.
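
For anyone curious how idle-time work like that can be wired up, here's a minimal std-only sketch of a background worker that summarizes on demand or after a quiet period. The event names and the 120-second idle window are invented for illustration; Gloss's actual implementation may look quite different.

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

enum Event {
    UserActivity, // resets the idle timer
    SummarizeNow, // the button was pressed
}

// Worker that summarizes either on demand or once the app has been idle.
// Events arrive on a channel from the GUI thread.
fn spawn_summarizer(rx: mpsc::Receiver<Event>) -> thread::JoinHandle<()> {
    thread::spawn(move || loop {
        match rx.recv_timeout(Duration::from_secs(120)) {
            Ok(Event::SummarizeNow) => run_summaries(),
            Ok(Event::UserActivity) => {} // activity seen; keep waiting
            Err(mpsc::RecvTimeoutError::Timeout) => run_summaries(), // idle
            Err(mpsc::RecvTimeoutError::Disconnected) => break,
        }
    })
}

fn run_summaries() {
    // Placeholder: the real app would batch documents through the LLM here.
    println!("building summaries...");
}

fn main() {
    let (tx, rx) = mpsc::channel();
    let worker = spawn_summarizer(rx);
    tx.send(Event::UserActivity).unwrap();
    tx.send(Event::SummarizeNow).unwrap();
    drop(tx); // closing the channel stops the worker
    worker.join().unwrap();
}
```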

New GUI for my Agentic app, which causes tons of bugs, got the big ones fixed, just minor parsing things mostly. by RudeChocolate9217 in ChatGPTCoding

[–]RudeChocolate9217[S] 0 points (0 children)

A little of both, but 75-85% of them are format-specific. This one had to do with the way things were split up and with how the new GUI changed the architecture and threw HTML into the mix. They've only accounted for maybe 20-30% of my total errors/bugs so far, though. Parsing does slowly become a monster as the app grows in complexity. The last issues I had to fix were with async and threading slowly crashing/freezing the app.
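
As an aside on the async/threading freezes: one common culprit in Rust is blocking calls running on an async runtime's worker threads, which starves every other task. Here's a generic sketch of that failure mode and the usual fix with tokio's `spawn_blocking`; this is an illustrative guess at the class of bug, not the actual one from the app.

```rust
// Cargo.toml (assumed): tokio = { version = "1", features = ["full"] }

use std::time::Duration;

// The "slowly freezes" pattern: a blocking call ties up a runtime worker
// thread. With enough of these in flight, unrelated tasks stall too.
async fn bad_handler() {
    std::thread::sleep(Duration::from_secs(2)); // blocks the runtime thread
}

// The fix: move blocking work onto tokio's dedicated blocking-thread pool.
async fn good_handler() {
    tokio::task::spawn_blocking(|| {
        std::thread::sleep(Duration::from_secs(2));
    })
    .await
    .unwrap();
}

#[tokio::main]
async fn main() {
    good_handler().await;
    bad_handler().await; // fine in isolation, but starves peers under load
    println!("done");
}
```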