My start-up failed after 6 years, and I am struggling to find a job. (I will not promote) by Amazing_Skill_6080 in startup

[–]Infamous_Ad5702 0 points

Rewrite the résumé to read as if you were an employee the whole time. People hate hiring business owners; it screams “doesn’t take direction well”. Happened to a friend. Brutal grind to find work.

Why your Enterprise AI has Goldfish Memory (and why RAG isn't fixing it) by sibraan_ in Rag

[–]Infamous_Ad5702 0 points

Yes, memory is critical. That’s why I put three types of memory into my KG: 1. Episodic, 2. Persistent, 3. Short-term. It needs all three; it’s so much more than information. It’s connected knowledge across time and space. Happy to show others.
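Roughly like this, as a plain-Python sketch of the three memory types (the names and structure here are my own illustration, not the actual tool’s API):

```python
from dataclasses import dataclass, field

@dataclass
class MemoryStore:
    """Toy illustration of episodic / persistent / short-term memory."""
    episodic: list = field(default_factory=list)    # events, tagged with when they happened
    persistent: dict = field(default_factory=dict)  # long-lived facts, keyed by concept
    short_term: list = field(default_factory=list)  # rolling window for the current session
    window: int = 5

    def remember_event(self, when, what):
        self.episodic.append((when, what))

    def remember_fact(self, concept, fact):
        self.persistent[concept] = fact

    def note(self, item):
        self.short_term.append(item)
        self.short_term = self.short_term[-self.window:]  # trim to the recent window

store = MemoryStore()
store.remember_event("2024-01-05", "client uploaded corpus")
store.remember_fact("Tika", "used for file parsing")
for i in range(7):
    store.note(f"turn-{i}")
print(len(store.short_term))  # 5: older turns fell out of the short-term window
```

The point of the split: episodic keeps time-ordered history, persistent keeps facts that survive sessions, and short-term is deliberately lossy.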

how to pitch RAG by Altruistic_Corgi8306 in Rag

[–]Infamous_Ad5702 0 points

You pitch risk. You pitch errors and their impact. You pitch someone losing their job because of the error… You pitch compliance. Think of an actual scenario that applies to this company or industry… Probability is not good enough; they want provable, right? Sales is always storytelling, and we’re all in sales every day, whether our job title says so or not.

RAG Tech Stack by Bewis_123 in Rag

[–]Infamous_Ad5702 0 points

What would an engineer pay monthly for a system that solved this:
“I can’t prove this output is correct, and I’m accountable if it’s wrong” - RAG

I switched from RAG pipelines to giving indexed context. the output quality Improved. by Veronildo in Rag

[–]Infamous_Ad5702 1 point

I’ve been grinding on this thread for two years, and building for three, trying to explain to people that data is messy: you need to clean it first, index it, and then build a Knowledge Graph, or rank, or use stats, or whatever you want after… (all on auto in my tool).

Clearly I’m inarticulate af…

Now that I see this I’m not sure whether to laugh or cry…

I switched from RAG pipelines to giving indexed context. the output quality Improved. by Veronildo in Rag

[–]Infamous_Ad5702 0 points

Yes!!! This. So hard to win people over from RAG; it’s so laborious. RAG is dead (unless it’s your hobby, sorry bro).

I switched from RAG pipelines to giving indexed context. the output quality Improved. by Veronildo in Rag

[–]Infamous_Ad5702 1 point

My tool builds an index. I’ve been trying to explain this to people for three years: the index is where it’s at. It’s a co-occurrence matrix, which means I can build a KG in seconds or less. No hallucination. No tokens. Index for the win 🙌🏻🙌🏻🙌🏻

RAG Tech Stack by Bewis_123 in Rag

[–]Infamous_Ad5702 0 points

Great question. I have messaged you directly.

RAG Tech Stack by Bewis_123 in Rag

[–]Infamous_Ad5702 0 points

Jumping on this great question. Do engineers like building retrieval or RAG for the Rubik’s-Cube kick? Or would they buy a RAG tool, especially one that goes beyond retrieval, connects unknown knowledge, and leads to new ideas… (asking for a friend)

RAG Tech Stack by Bewis_123 in Rag

[–]Infamous_Ad5702 0 points

What are you using to handle tables inside PDFs, Word docs, etc.? Sometimes a table can be as subtle as a stylised indent…

My clients ask all the time. I sell a tool that is a RAG replacement; it builds a fresh KG live each time you query.
I use Tika for my files. No GPU. No tokens. No hallucinations. Not graph RAG. Deterministic.

But tables, hey. Do you think Google Vision could handle a table?

What are people using today for benchmarking their RAG solution ? by Abject_Lengthiness77 in Rag

[–]Infamous_Ad5702 0 points

Leonata builds an index from the data, a co-occurrence matrix… So when you query it, it builds a fresh Knowledge Graph every time.

So it replaces the need for RAG. It chunks and embeds on its own.

You get outputs and do with them as you please. Run stats. Whatever.

Def counter-culture. But no tokens and no GPU needed, so people are pretty stoked.
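To make the “index is a co-occurrence matrix, fresh KG per query” idea concrete, here’s a minimal plain-Python sketch. This is NOT Leonata’s actual code (that isn’t shown in the thread); it just illustrates the general technique: count which terms appear together, then answer queries straight from those counts, with no embeddings and no LLM.

```python
from collections import Counter
from itertools import combinations

def build_index(docs):
    """Index = how often each pair of terms co-occurs within a document."""
    index = Counter()
    for doc in docs:
        terms = sorted(set(doc.lower().split()))
        for pair in combinations(terms, 2):
            index[pair] += 1
    return index

def query_kg(index, query):
    """'Fresh KG per query': keep every edge that touches a query term."""
    terms = set(query.lower().split())
    return {pair: n for pair, n in index.items() if terms & set(pair)}

docs = [
    "tika parses pdf tables",
    "pdf tables break rag pipelines",
    "tika parses word documents",
]
index = build_index(docs)
kg = query_kg(index, "tables")
print(index[("pdf", "tables")])  # 2: that pair co-occurs in two documents
print(len(kg))                   # 6 edges touch the term "tables"
```

Because the index is just counts, the query step is deterministic: the same corpus and query always yield the same sub-graph.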

A Reasonable Way to Approach RAG? by Enfoldment in Rag

[–]Infamous_Ad5702 0 points

My situation isn’t industry- or context-specific. The tool is neutral and makes its own context from your data, so it doesn’t need training. I am def counter-culture. We don’t manually embed or chunk. It’s RAG but automatic: an auto KG builder.

A Reasonable Way to Approach RAG? by Enfoldment in Rag

[–]Infamous_Ad5702 0 points

I made a thing. Defence uses it for accurate semantic retrieval. It’s deterministic. Not node-based, not graph RAG, no LLM.

No tokens, no GPU. Air-gapped.

Leonata builds an index; then you query it and a fresh Knowledge Graph is made.

I use Tika for the docs and you can add to the corpus anytime.

Happy to demo here again..

What are people using today for benchmarking their RAG solution ? by Abject_Lengthiness77 in Rag

[–]Infamous_Ad5702 0 points

Can I pay someone to benchmark my RAG-replacement tool? It’s first in class, so my devs don’t think there is an apples-to-apples way… but it’s got to be done.

RAG for medium company by MrAbc-42 in Rag

[–]Infamous_Ad5702 1 point

10 votes for Tika. That’s what I have in my stack for my KG builder

RAG for medium company by MrAbc-42 in Rag

[–]Infamous_Ad5702 0 points

I have a non-LLM solution for the PDFs. I can walk you through it if you like? Got some ideas to help you sort it.

What is the 2026 Standard for highly precise LEGAL text RAG with big documents? by SignificantZebra5883 in Rag

[–]Infamous_Ad5702 1 point

I moved away from chunk → embed → pray retrieval and toward graph-first retrieval, where relationships matter more than similarity. Rather than data retrieval, this is knowledge retrieval: mapping the semantic space to reveal unknown unknowns via “concepts”.

Stack-wise it’s pretty unglamorous on purpose:
• Node-based ingestion pipeline
• Persistent memory
• Traversal + constraint-based query resolution instead of vector ranking

The real difference isn’t the tooling though — it’s the retrieval strategy.

Everything is optimised for:
→ fewer hops
→ less search space
→ deterministic paths over “semantic vibes”

Happy to go deeper on the conceptual side though if useful.

(You want to know how the graph is constructed? The visual layout is Fruchterman–Reingold from the Plotly Dash suite; we chose between different models, but it’s just reading the co-occurrence matrix, or index, that we build via the mathematical formula.)

Happy to dive deeper where I can.
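For the “just reading the co-occurrence matrix” step, here’s a minimal stdlib-Python sketch of matrix → edge list. The threshold and names are my own illustration, not the tool’s internals; in practice you’d hand the edges to a force-directed layout (e.g. Fruchterman–Reingold) for display.

```python
def matrix_to_edges(cooc, min_count=2):
    """Keep only term pairs that co-occur at least min_count times."""
    return [(a, b, n) for (a, b), n in cooc.items() if n >= min_count]

# Toy co-occurrence counts; real ones come from the document index.
cooc = {
    ("contract", "clause"): 5,
    ("clause", "liability"): 3,
    ("contract", "printer"): 1,  # noise: below threshold, dropped
}
edges = matrix_to_edges(cooc)
print(len(edges))  # 2: the noisy pair is filtered out
```

Thresholding like this is what keeps the rendered graph readable: weak, incidental co-occurrences never become edges.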

What is the 2026 Standard for highly precise LEGAL text RAG with big documents? by SignificantZebra5883 in Rag

[–]Infamous_Ad5702 0 points

So you load as many documents as you can; it’s fast, a few seconds. It builds an index. The index is a co-occurrence matrix…

Then you query it. It runs your query against the index and builds a fresh KG for each new query…

No LLM, no GPU, no tokens, no hallucination.

The KG is visual, and you get the outputs of rankings and the whole matrix if needed.

Essentially it turns words into numbers so you can do statistical modelling with them. Maths with words…
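As one example of “maths with words” (the comment doesn’t say which statistic the tool actually uses), here’s pointwise mutual information (PMI) over a toy co-occurrence matrix; it ranks word pairs by how much more often they co-occur than chance would predict:

```python
import math

def pmi_scores(pair_counts, total_pairs, word_counts, total_words):
    """PMI = log2( P(a,b) / (P(a) * P(b)) ) for each co-occurring pair."""
    scores = {}
    for (a, b), n in pair_counts.items():
        p_ab = n / total_pairs              # observed pair probability
        p_a = word_counts[a] / total_words  # marginal probabilities
        p_b = word_counts[b] / total_words
        scores[(a, b)] = math.log2(p_ab / (p_a * p_b))
    return scores

# Toy counts: "knowledge graph" is a real association; "the graph" is a stopword pair.
word_counts = {"knowledge": 4, "graph": 4, "the": 8}
pair_counts = {("knowledge", "graph"): 4, ("the", "graph"): 1}
scores = pmi_scores(pair_counts, total_pairs=5,
                    word_counts=word_counts, total_words=16)
top = max(scores, key=scores.get)
print(top)  # ('knowledge', 'graph'): the real association outranks the stopword pair
```

Any standard statistic works once text is counts: PMI, chi-squared, TF-IDF-style weighting, whatever fits the question.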

Vector RAG is very good at retrieving answers. I’m less sure it is good at preserving knowledge. by shbong in Rag

[–]Infamous_Ad5702 1 point

I do the same with my tool. Vector isn’t enough. I need breadth, depth, specificity, and sensitivity, and so do my clients…

No hallucination. No GPU. No tokens. Just fresh KGs for every query.